Information Security Trends 2024

Sergio Bertoni, Lead Analyst at SearchInform, shares his thoughts and predictions on the key trends in information security for the year 2024.

LACK OF INFORMATION SECURITY EXPERTS ON THE MARKET: IS PROFESSIONAL RETRAINING THE ANSWER?

One major cybersecurity problem threatening organizations around the globe is the severe shortage of information security experts on the market. What's more, according to the ISC2 Cybersecurity Workforce Study, the global workforce gap grew by an additional 12.6% this year. SearchInform's own research revealed that in 2022 one third of executives admitted to a severe shortage of IS officers.
 
The shortage is aggravated by the fact that there are few opportunities to expand an IS team on short notice. To become a specialist, a person has to study for a few years; to become a real expert, they also need to gather plenty of hands-on experience, which means working for at least a few more years. However, information security issues cannot be put off: IS officers are needed right now, and it is impossible to wait years until new specialists are ready to start work.

One option that can help solve the issue is professional requalification. It may be useful to develop and implement a retraining plan for selected employees, providing them with the knowledge and competencies they need to deal with IS-related issues. Another option is to hire external specialists and retrain them, deepening their knowledge and verifying that they have acquired the required competencies. The tricky part is that if the people in charge of protecting the organization are inexperienced (and someone who has just finished a retraining course obviously is), this will affect the quality and reliability of protection.

In order to mitigate the associated risks, I can recommend a few measures. First of all, implement advanced protective software. Choose solutions from reputable vendors and test them before purchase. It is crucial that the vendor stays in touch with the customer after the solution is purchased, so I recommend choosing vendors that can assist throughout the whole period of software usage. This means the vendor's experts should assist customers at least with:

  • Protective software implementation;
  • Customization of systems according to the client's requirements;
  • Elimination of any technical issues related to the protective software that may occur.

This helps to use the solutions efficiently and get the most out of them. In current circumstances, if an organization lacks the resources and expertise to protect itself, the best choice may be to contract a Managed Security Service (MSS) provider. The MSS company should take over the tasks that would otherwise be handled by in-house experts. The ideal scenario is when the MSS provider offers its own protective solutions, administered by its outsourced analysts. In that case, an organization that has no in-house information security officer and struggles to hire one is still protected by experienced specialists and advanced protective solutions.

DATA CLASSIFICATION: MANDATORY ADDITION OF SPECIFIC ATTRIBUTES TO STORED PERSONAL DATA FOR DETECTION OF DATA LEAK SOURCES.

When discussing the future of data privacy, it is necessary to mention that the problem of identifying the source of a data leak will soon become a crucial one. The volume of data leaked and published on the darknet is tremendous, and this includes personal data. Quite often, personal details and other confidential data about people are compiled and duplicated across a number of databases. This means that quite soon it will simply be impossible to determine the source of a leak from the type of data alone.

In other words, if somebody searches for data about themselves on the internet and finds a file containing their passport details, email, address and so on, the question arises: where does this data originate from? I think there is a high probability that some countries will soon prohibit storing confidential data (personal details) without adding unique attributes to it. If an organization ignores the requirement, it will be prohibited from storing such data at all (currently, no country has adopted such legislation).

To make my idea clear, let me provide an example. A personalized phishing attack was performed on a person; the intruders used their personal details to customize the attack. It is known that the user's details were leaked several times: from a liquor store's database, from a marketplace's database and from a governmental body's database. Most of the data leaked from these sources is the same, and there are no specific attributes that could help identify the leak source (in other words, which particular leaked database the intruders obtained and used).

The question arises: what might these unique attributes be, and how could they be implemented in practice? In the case of deepfakes, for example, a watermark can be added. But what about personal data? This is a crucial question, yet it is quite difficult to answer. I personally believe that specific mathematical methods will be required, as the data itself is not unique. Scientists will need to be involved to elaborate a method that allows unique attributes to be added to confidential data; otherwise, the very existence of personal data will be at stake.
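Purely as an illustration of what such an attribute could look like, here is a minimal sketch of one possible approach, sometimes described as database watermarking or canary records: every copy of a record handed to a recipient carries a keyed tag derived from a secret held by the data owner, so a leaked copy can later be matched back to its recipient. All names, field layouts and the recipient list below are hypothetical, and this is a sketch of one technique, not the method any regulator has mandated.

```python
# Illustrative sketch: tagging each distributed copy of a record with an
# HMAC-based attribute so a leaked copy can be traced to its recipient.
# Field names and recipients are hypothetical.
import hmac
import hashlib

SECRET_KEY = b"keep-this-offline"  # held only by the data owner

def leak_tag(record_id: str, recipient: str) -> str:
    """Deterministic per-recipient attribute for one record."""
    msg = f"{record_id}:{recipient}".encode()
    return hmac.new(SECRET_KEY, msg, hashlib.sha256).hexdigest()[:16]

def tag_record(record: dict, recipient: str) -> dict:
    """Return a copy of the record with the tracing attribute added."""
    tagged = dict(record)
    tagged["leak_tag"] = leak_tag(record["id"], recipient)
    return tagged

def identify_source(leaked: dict, recipients: list) -> str | None:
    """Recompute tags for known recipients and find the one that matches."""
    for recipient in recipients:
        if hmac.compare_digest(leak_tag(leaked["id"], recipient),
                               leaked.get("leak_tag", "")):
            return recipient
    return None

# Usage: the same record is shared with two partners; each copy carries a different tag.
record = {"id": "42", "name": "Jane Doe", "email": "jane@example.com"}
copies = {r: tag_record(record, r) for r in ["liquor_store", "marketplace"]}
print(identify_source(copies["marketplace"], ["liquor_store", "marketplace"]))  # -> marketplace
```

The obvious limitation is that an explicit attribute can simply be stripped from a leaked copy, which is exactly why more robust mathematical methods still need to be elaborated.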

DEVELOPMENT OF LEGISLATION AIMED AT REGULATING INFORMATION PROCESSING WITH AI.

Basically, this issue is a big legal gap at the moment. By default, the user of an AI-powered tool is considered fully responsible for any issues that may arise (note that the responsible party is not the tool's developer, nor even the provider; the person running the process is the one in charge).

The company that develops the AI-driven tool cannot be held legally responsible, because it only develops the mechanism. Obviously, neither the mechanism nor the mathematical models behind it can be held legally responsible: an AI tool cannot be a defendant in court.

Nowadays, people all over the world actively use AI-driven tools, but in fact nobody understands precisely how all of these tools work. This is especially dangerous when personal data is processed. Let me explain this risk with the following example.

Case study. Let's imagine that there is a local server running ChatGPT. The user makes the server scan the internet to train the neural network for some specific task. While scanning the internet, the AI obtains plenty of personal data. Formally, the user who set the task for the AI has turned into a personal data operator. At the same time, the user never asked the tool to search for personal details and did not even know that the AI had gathered personal data; the technology decided by itself that this data was required for the task. This dilemma is quite dangerous and must be solved.
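To make the risk more tangible, below is a minimal, purely illustrative sketch of the kind of pre-filtering a user could apply to scraped text before it enters a training corpus. The regular expressions cover only e-mail addresses and simple phone formats and are my own assumption, not a complete personal data detector; real deployments rely on far more thorough tooling.

```python
# Illustrative sketch only: stripping obvious personal identifiers from
# scraped text before storing it for training. The patterns are deliberately
# simple and will miss many kinds of personal data.
import re

EMAIL_RE = re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+")
PHONE_RE = re.compile(r"\+?\d[\d\s().-]{7,}\d")

def scrub(text: str) -> str:
    """Replace detected identifiers with placeholders before storage."""
    text = EMAIL_RE.sub("[EMAIL]", text)
    text = PHONE_RE.sub("[PHONE]", text)
    return text

scraped = "Contact Jane at jane.doe@example.com or +1 (555) 123-4567 for details."
print(scrub(scraped))  # Contact Jane at [EMAIL] or [PHONE] for details.
```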

Thus, methods that make the situation clearer will have to be developed soon, and new regulatory acts will probably be adopted. One of the most efficient approaches may be to develop specific legislation that regulates the sphere appropriately, provides the required clarifications and allocates each party's responsibilities, making the field safer and more transparent.

As a recommendation for all employees, I'd like to suggest the following: first of all, try to minimize potentially risky operations. Below is a list of a few basic measures that should be implemented by all users without exception:

  • Avoid keeping data on unverified cloud services.
  • Don't make requests to an AI-driven tool that could potentially lead to unwanted operations by the system.
  • Don't upload any confidential data to AI-operated tools.

A SPECIFIC AI-DRIVEN SOLUTION FOR INFORMATION NOISE FILTRATION.

In the near future, a specific AI solution that filters out information noise and provides the user with only truthful and unique information will probably be developed.

Basically, there is high demand for such a solution, but it does not yet exist. In my opinion, the main idea is the following: the tool filters opinions and rechecks facts, helping to distinguish truthful from false information, as well as unique from non-unique information on the internet. Technically, it can be developed. In some ways the technology should be similar to ChatGPT: it must be able to consider context and search the internet or other data sources to find proof of a fact or to rebut it. It is important to understand that the overwhelming majority of information on the internet is information noise; in other words, it does not add anything meaningful.

To illustrate this point, let me share an example. Some news was published. The news is true, everything is fine with it. Users have added 500,000 comments to this news item, and these comments do not contain any new or unique thoughts or data. However, if even one comment containing new or original information is added, it needs to be found within the overall comment flow. All in all, if a user simply expresses a personal opinion without making appropriate references, this is information noise. The system needs to know how to differentiate information in order to decide whether something is true or not.

Let's examine one more example: news is published that a major corporation has closed. The situation is complicated by the fact that after numerous reprints the original sense of the news vanishes completely. Thus, the tool should check whether the information is true or not. In fact, this is absolutely routine work, not very intellectual, but the data has to be rechecked against a large number of sources.
 
So, to sum up: such a tool must be capable of filtering information and, after analysis, provide users only with information that is meaningful, truthful and brings something new. In the example above (about the user's comment), if the comment is unique, it can be shown to the user (even if it is not 100% true), but labeled as unique.
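As a purely illustrative sketch of one small building block of such a tool, the snippet below flags comments that add nothing new by comparing their word bigrams with those of comments already seen. The shingle size and similarity threshold are arbitrary assumptions of mine; a real system would also need fact-checking against external sources, which this sketch does not attempt.

```python
# Minimal sketch of a uniqueness check: a comment is treated as information
# noise if its word bigrams overlap too heavily with an earlier comment.
import re

def shingles(text: str, n: int = 2) -> set:
    """Lower-cased word n-grams of a comment, ignoring punctuation."""
    words = re.findall(r"\w+", text.lower())
    return {" ".join(words[i:i + n]) for i in range(max(len(words) - n + 1, 1))}

def is_novel(comment: str, seen: list, threshold: float = 0.5) -> bool:
    """A comment is 'novel' if it is not too similar to anything seen so far."""
    current = shingles(comment)
    for previous in seen:
        union = current | previous
        overlap = len(current & previous) / len(union) if union else 1.0
        if overlap >= threshold:  # near-duplicate -> information noise
            return False
    return True

seen = []
for text in [
    "The corporation has closed, what a shame.",
    "What a shame that the corporation has closed.",
    "Their subsidiary in Spain keeps operating under a new name.",
]:
    print(is_novel(text, seen), "-", text)  # True, False, True
    seen.append(shingles(text))
```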


SearchInform is a 100% private company that develops risk management products and is one of the industry leaders. More than 4,000 companies across 20+ countries are SearchInform clients. The development team has been creating search technologies for unstructured data since 1995 and started developing information security solutions in 2004. Today, the team offers products and services for comprehensive protection against insider threats at all levels of corporate information systems.
