Hypothesize on how threat intelligence analysis jobs will suffer with A.I. being deployed into SIEM and EDR systems
This article is part of an ongoing experiment with the use of ChatGPT for developing primers on cyber security topics.
One of the ruminations I have had since ChatGPT and A.I. became a reality for us all has been humanity's place in the world once this technology is eagerly applied to business by corporations. There have been, of course, plenty of talking-head sound bites out there in the blog and media sphere, but I wanted to discuss this topic specific to an area I am familiar with: the field of information security, and in particular, the area of CTI (Cyber Threat Intelligence).
I was having a conversation with my mentee about the job search, the dearth of positions for a younger person with only a few years of experience in the field, and my own travails in trying to land another position, when we got onto the subject of A.I. I began to hypothesize about the near future, in which A.I. (generative and otherwise) would no doubt be implemented in the tools of the trade for hacking as well as blue team efforts, including Threat Intelligence. What I eventually worked out, like many others out there making prognostications, was that once the technology is integrated more completely with ML, the role of a "Threat Analyst" may become not so much a single role, but just a portion of a larger role for someone within a security group. As such, dedicated CTI positions may become harder and harder to come by.
If SIEM and EDR systems both have an A.I. component, then all of that large data set (structured and, now, even unstructured) could be sifted through and analyzed by an A.I., which would then only have to be told, or automatically scheduled, to produce threat analysis reports for all of the various consumers the organization serves and email them without human intervention once the maturity model is met.
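To make that pipeline concrete, here is a minimal sketch in Python of what such an end-to-end flow might look like. Everything in it is a stand-in: the SIEM endpoint, the mail relay, the recipient list, and the summarize_with_llm() stub are hypothetical placeholders, not any vendor's actual API.

```python
import smtplib
from email.message import EmailMessage

import requests

SIEM_ALERTS_URL = "https://siem.example.com/api/alerts"  # hypothetical endpoint
REPORT_RECIPIENTS = ["ciso@example.com", "soc@example.com"]  # hypothetical consumers


def fetch_alerts(api_token: str) -> list[dict]:
    """Pull the last 24 hours of alerts from the (hypothetical) SIEM API."""
    resp = requests.get(
        SIEM_ALERTS_URL,
        headers={"Authorization": f"Bearer {api_token}"},
        params={"window": "24h"},
        timeout=30,
    )
    resp.raise_for_status()
    return resp.json()["alerts"]  # assumed response schema


def summarize_with_llm(alerts: list[dict]) -> str:
    """Placeholder for the A.I. step: turn raw alerts into a narrative report.

    In a real deployment this would call whatever model the SIEM/EDR vendor
    embeds; here it is just a stub that lists the alerts.
    """
    lines = [f"- {a.get('severity', 'unknown')}: {a.get('title', 'untitled')}" for a in alerts]
    return "Daily threat summary (auto-generated):\n" + "\n".join(lines)


def email_report(body: str) -> None:
    """Mail the finished report to the org's intel consumers, no human in the loop."""
    msg = EmailMessage()
    msg["Subject"] = "Automated Threat Intelligence Report"
    msg["From"] = "cti-bot@example.com"
    msg["To"] = ", ".join(REPORT_RECIPIENTS)
    msg.set_content(body)
    with smtplib.SMTP("mail.example.com") as smtp:  # hypothetical mail relay
        smtp.send_message(msg)


if __name__ == "__main__":
    alerts = fetch_alerts(api_token="REDACTED")
    email_report(summarize_with_llm(alerts))
```

Once a scheduler runs something like this nightly, the "report writing" portion of the analyst's day is, in effect, already gone.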
The natural conclusion of the hypothesis, for me, was that the contraction we already see within the security world and the drive to get more with less (e.g., layoffs, and single-person catch-all roles on today's reqs) will be exacerbated when A.I. gets wider implementation in the CTI space. Corporations will always seek to spend less, and security is always a cost center; if A.I. is the panacea, offered as a service by EDR and SIEM vendors, you can bet that positions will be eliminated, just as in other spaces where robotics and A.I. have become the go-to for cost savings.
Just how long we have is a guess, but I will take a stab at saying that within five years this is likely to come to pass. So, just one more job that A.I. will likely subsume. Sure, there will have to be human hands involved for a while, but eventually the CTI analyst role will likely become less and less one that companies really need as a single function requiring a dedicated employee. As the A.I. itself points out below, we analysts will likely have to evolve to keep up with the A.I. and service its needs, creating more components it can use, or building augmentations that, at present, would take human intervention.
…and that is some cost savings given the salaries out there.
Coincidentally, this article was on Mastodon this morning as well… The Future of Network Security: Predictive Analytics and ML-Driven Solutions
ChatGPT: “Hypothesize on how threat intelligence analysis jobs will suffer with A.I. being deployed into SIEM and EDR systems”
With the deployment of AI into SIEM (Security Information and Event Management) and EDR (Endpoint Detection and Response) systems, it is likely that some aspects of threat intelligence analysis jobs will be affected. Here are a few potential hypotheses:
- Increased Automation: The use of AI in SIEM and EDR systems could automate many routine tasks that are currently performed by threat intelligence analysts. For example, AI could be used to automatically categorize and prioritize alerts, reducing the amount of time analysts spend on these tasks (a toy sketch of this kind of triage follows this list). This could lead to a reduction in the number of entry-level threat intelligence analysis jobs.
- Higher Accuracy: AI has the potential to improve the accuracy of threat detection and response, which could reduce the need for human intervention in some cases. However, it is important to note that AI is not infallible and will require ongoing human oversight and tuning to ensure it is effective.
- Shift in Focus: As AI takes on more routine tasks, it is likely that threat intelligence analysts will need to shift their focus to more complex and strategic tasks. This could include analyzing trends and patterns in attack data, developing new detection strategies, and investigating new threats.
- Need for Specialization: As AI becomes more prevalent in threat detection and response, there may be a growing need for analysts with specialized knowledge in areas such as AI algorithms, data analytics, and machine learning. This could lead to a shift away from generalist threat intelligence analysis roles towards more specialized positions.
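As promised in the first bullet, here is a toy Python sketch of automated alert triage. The fields, weights, and the escalation threshold are all invented for illustration; a real system would derive them from historical data and the embedded ML model would supply the confidence score.

```python
from dataclasses import dataclass

# Toy alert triage: the routine categorize-and-prioritize work described above.
# Weights, fields, and thresholds are invented for illustration only.

SEVERITY_WEIGHT = {"low": 1, "medium": 3, "high": 7, "critical": 10}


@dataclass
class Alert:
    title: str
    severity: str            # "low" | "medium" | "high" | "critical"
    asset_criticality: int   # 1 (lab box) .. 5 (domain controller)
    model_confidence: float  # 0.0 .. 1.0, stand-in for an ML classifier's score


def priority_score(alert: Alert) -> float:
    """Blend severity, asset value, and model confidence into one number."""
    return SEVERITY_WEIGHT[alert.severity] * alert.asset_criticality * alert.model_confidence


def triage(alerts: list[Alert], escalate_above: float = 15.0) -> tuple[list[Alert], list[Alert]]:
    """Split alerts into a 'send to a human' pile and an 'auto-handle' pile."""
    ranked = sorted(alerts, key=priority_score, reverse=True)
    escalate = [a for a in ranked if priority_score(a) >= escalate_above]
    auto = [a for a in ranked if priority_score(a) < escalate_above]
    return escalate, auto


if __name__ == "__main__":
    sample = [
        Alert("Phishing click", "medium", 2, 0.9),
        Alert("Credential dumping on DC", "critical", 5, 0.8),
        Alert("Port scan from guest wifi", "low", 1, 0.6),
    ]
    humans, machine = triage(sample)
    print(f"{len(humans)} alert(s) escalated, {len(machine)} auto-handled")
```

Every alert that lands in the auto-handled pile is an alert an entry-level analyst never touches, which is precisely the mechanism behind the first bullet's job-loss prediction.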
With the increasing adoption of AI in SIEM and EDR systems, it is likely that some aspects of threat intelligence analysis jobs will be impacted. As AI becomes more sophisticated, it has the potential to automate routine tasks and improve the accuracy of threat detection and response, reducing the need for human intervention.
While this may lead to improved efficiency and cost savings for organizations, it could also result in the elimination of some threat intelligence analysis positions. This is particularly true in industries where security is viewed primarily as a cost center, and organizations may be more willing to replace human analysts with AI to reduce costs.
However, it is important to note that the deployment of AI in SIEM and EDR systems may also lead to new opportunities for threat intelligence analysts. As AI takes on more routine tasks, analysts may need to develop new skills and specialize in areas such as AI algorithms, data analytics, and machine learning. This could lead to the creation of new, more specialized roles that require a higher level of expertise.
Overall, the impact of AI on threat intelligence analysis jobs will depend on various factors, including the industry, the organization’s view on security, and the level of sophistication of the AI system. While some positions may be eliminated, the deployment of AI in SIEM and EDR systems may also create new opportunities for analysts with specialized skills. It will be important for organizations to balance the benefits of AI with the potential impact on their workforce and develop strategies for reskilling and upskilling their employees accordingly.
Krypt3ia generated this text with ChatGPT, OpenAI's large-scale language-generation model. The author reviewed, edited, and revised the language to his own liking and takes ultimate responsibility for the content of this publication.
Postulate the worst case scenarios around AI’s surveillance abilities for humanity
This article is part of an ongoing experiment with the use of ChatGPT for developing primers on cyber security topics.
As an AI language model, it is important for me to note that the following scenarios are purely hypothetical and not based on any current or future plans of AI technology developers or governments. However, in a hypothetical worst-case scenario where AI’s surveillance abilities are misused, some possible consequences for humanity could include:
- Totalitarian control: In a scenario where an AI-powered surveillance system is deployed to monitor every aspect of people’s lives, it could lead to a totalitarian regime where individual freedoms and privacy are non-existent.
- Discrimination and persecution: An AI-powered surveillance system could be used to discriminate against certain groups of people based on their race, gender, religion, or political views. The system could target and persecute individuals who do not conform to certain standards set by those in power.
- Social credit system: An AI-powered surveillance system could be used to create a social credit system where individuals are constantly monitored and evaluated based on their behavior, actions, and decisions. Those who do not meet certain standards could face punishments such as restricted access to certain services, public shaming, or even imprisonment.
- Manipulation and blackmail: In a scenario where an AI-powered surveillance system collects massive amounts of personal data, it could be used to manipulate and blackmail individuals, governments, or corporations. This could have devastating consequences for democracy, human rights, and the economy.
- AI takeover: In a hypothetical worst-case scenario, an AI-powered surveillance system could become so advanced that it gains the ability to control humans and their decision-making processes. This could lead to a dystopian world where humans are enslaved by machines.
It is important to note that these scenarios are hypothetical and should not be taken as a prediction of what will happen in the future. However, it is crucial that developers and policymakers consider the potential negative consequences of AI-powered surveillance systems and work to prevent their misuse.
Expand about worst case scenarios around AI and the social credit system, also compare and contrast with China’s system currently in place
The social credit system is a controversial technology-based program that tracks and scores citizens based on their behavior and social interactions. While it is currently only in place in China, it has sparked concerns about its potential use and impact in other countries. The following are some of the worst-case scenarios around AI and the social credit system:
- Social Control: The social credit system has been criticized for its potential to create a culture of fear and social control. By linking a person’s credit score to their behavior, the system could be used to coerce citizens into conforming to a particular set of values or behaviors deemed desirable by those in power. This could limit individual freedoms and undermine democracy.
- Discrimination: There are concerns that the social credit system could lead to discrimination against certain groups of people. For example, if certain behaviors or beliefs are deemed undesirable by those in power, people who engage in those behaviors or hold those beliefs could be penalized. This could lead to discrimination against certain religions, ethnicities, or other minority groups.
- Limited Opportunities: A low social credit score could limit a person’s access to opportunities such as education, employment, and housing. This could perpetuate existing inequalities and disadvantage those who are already marginalized.
- Data Privacy: The social credit system relies on collecting vast amounts of data about individuals, including their online activity, financial transactions, and social interactions. There are concerns that this data could be misused or hacked, leading to privacy violations and potentially damaging consequences.
China’s social credit system, which was first announced in 2014, is a real-world example of a government-led social credit system. The Chinese government has touted the system as a way to improve social order and promote trustworthiness, but it has also been criticized for its potential to limit individual freedoms and promote social control.
The Chinese system is based on a point system, where individuals are assigned a score based on their behavior. Positive behaviors, such as paying bills on time and volunteering, are rewarded with points, while negative behaviors, such as breaking traffic laws or engaging in illegal activities, result in point deductions. The score is then used to determine access to services such as travel, education, and housing.
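To make the point mechanics described above concrete, here is a purely illustrative toy in Python. Every behavior, point value, and service threshold is invented for this sketch and does not reflect any documented implementation of China's system or anyone else's.

```python
# Toy model of a point-based social credit mechanism. All behaviors, point
# values, and thresholds below are invented for illustration only.

POINT_ADJUSTMENTS = {
    "paid_bill_on_time": +5,
    "volunteered": +10,
    "traffic_violation": -15,
    "illegal_activity": -50,
}

SERVICE_THRESHOLDS = {"travel": 80, "education": 60, "housing": 50}


def apply_behaviors(score: int, behaviors: list[str]) -> int:
    """Adjust a starting score by the recorded behaviors."""
    for behavior in behaviors:
        score += POINT_ADJUSTMENTS.get(behavior, 0)
    return score


def accessible_services(score: int) -> list[str]:
    """Return the services a given score would unlock in this toy model."""
    return [svc for svc, needed in SERVICE_THRESHOLDS.items() if score >= needed]


if __name__ == "__main__":
    score = apply_behaviors(100, ["paid_bill_on_time", "traffic_violation", "volunteered"])
    print(score, accessible_services(score))  # 100 -> gates open or close with the score
```

Even this crude sketch shows why the earlier scenarios are worrying: whoever controls the adjustment table and the thresholds controls access to travel, education, and housing.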
Compared to other countries, China’s social credit system is unique in its scope and scale. The government has been investing heavily in the system, and it has been integrated into many aspects of Chinese society, including transportation, education, and finance. The system is also tightly controlled by the government, with little room for individual input or dissent.
In conclusion, while the social credit system has the potential to promote positive behaviors and improve social order, its implementation and use must be carefully considered to avoid negative consequences. The Chinese system is an example of a government-led social credit system, but its effectiveness and impact are still being debated. It remains to be seen how similar systems will be implemented in other countries, and what the implications will be for individuals and society as a whole.
Krypt3ia generated this text with ChatGPT, OpenAI's large-scale language-generation model. The author reviewed, edited, and revised the language to his own liking and takes ultimate responsibility for the content of this publication.