Experts Discuss the Role of AI in Cyber Security at MapleSEC

In the August 4 episode of MapleSEC 2021, experts didn't mince words when discussing cybersecurity. At the event, they reiterated that no one is safe from becoming a victim, especially given that remote working has handed malicious actors a wider attack surface.

But what should organizations do? For starters, ask the right questions, ITWC Chief Information Officer Jim Love said in opening remarks.

"Fundamentally, what do we need from our cyber security systems?" Love said. "I think we need to help those under tremendous pressure to detect threats earlier, make faster decisions, and provide better investigations."

George Nastassi, associate partner at IBM Canada's Threat Management Cloud and Cognitive Software, illustrated the complexity of the security landscape in broad strokes.

"Things have progressed over the years with the adoption of multi-cloud and hybrid cloud environments," Nastassi said. "And now we have AI, quantum and IoT security. That means more interconnected IT systems, more things to see and protect. And with that comes more inherent threats to deal with."

AI is integral to defending against modern threats
IT leaders need the right tools so they can respond quickly and accurately. As Nastassi noted, the traditional approach of manually monitoring and investigating incidents no longer cuts it in today's threat landscape. As cyber attacks have evolved, so have the number and complexity of solutions designed to counter them.

Many security solutions today use AI to spot trouble before an attack begins. Nastassi gave a pointed example involving threat detection. On average, 50 to 60 per cent of security incidents are potentially false positives or benign, he said. Instead of analysts spending hours on non-issues, machine learning can speed up the process by running these events through an algorithm, freeing up massive amounts of time for high-priority items.
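The triage Nastassi describes can be pictured as a classifier that scores each incoming alert and routes likely false positives to a low-priority queue. The sketch below is illustrative only; the feature names, weights, and threshold are hypothetical and not drawn from any specific product discussed at the event.

```python
import math

def triage_score(alert, weights, bias=0.0):
    """Logistic score: estimated probability that an alert is a real incident."""
    z = bias + sum(weights.get(k, 0.0) * v for k, v in alert.items())
    return 1.0 / (1.0 + math.exp(-z))

# Hypothetical weights, e.g. learned from logistic regression on
# historical alerts labeled "true incident" vs. "benign/false positive".
weights = {"failed_logins": 0.8, "off_hours": 1.2, "known_ip": -2.0}

alerts = [
    {"failed_logins": 12, "off_hours": 1, "known_ip": 0},  # suspicious pattern
    {"failed_logins": 1, "off_hours": 0, "known_ip": 1},   # routine activity
]

# Route low-scoring alerts to a benign queue; escalate the rest to analysts.
for a in alerts:
    queue = "escalate" if triage_score(a, weights) >= 0.5 else "benign"
    print(queue)
```

In practice such a model would be trained on an organization's own labeled alert history, which is exactly where the data-quality concerns raised later in the session come into play.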

In addition, AI can match patterns and pull relevant information in a matter of seconds for in-depth analysis.

But Love cautioned that although this is effective, leaders need to make informed decisions about solutions advertised as AI-enabled.

"AI is a hot topic of discussion at the moment. And not everything labeled AI is really AI, especially with the increasing number of companies jumping on the AI bandwagon," Love explained. "Most of today's AI offerings don't really pass the AI test. They may use technologies to analyze data and deliver a certain set of results, but that's an algorithm, not AI. It comes nowhere close to reproducing the cognitive abilities real AI employs to automate tasks."

AI goes both ways
While AI is helping organizations avoid catastrophic harm, threat actors are applying AI in nefarious ways. Nastassi said cybercriminals are using AI and automation to easily carry out large-scale attacks. Email-based attacks, social engineering, and defeating facial recognition are just some of the threats AI is breeding.

"Not only can I breach data from one company, I can now combine it with some publicly available information," said Adam Frank, chief technology officer for security intelligence at IBM Canada. "The content we're posting on our social media, LinkedIn, things like that. We can piece that information together, or criminals can piece it together, and then use it in a more targeted way when directing attacks against organizations."

Frank explained how AI has reduced the effort involved in creating target profiles, allowing threat actors to more easily impersonate an individual.

There is always a game of cat and mouse between cybercriminals and security researchers, and there are now many ways in which these profiling techniques can be defeated. For example, security solutions can detect deviations in user behavior and raise a flag when someone acts outside of their established baseline.
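The baseline-deviation idea can be sketched very simply: model a user's normal activity statistically and flag observations that stray too far from it. The single metric, data, and threshold below are hypothetical; real solutions correlate many behavioral signals at once.

```python
import statistics

def is_anomalous(history, observed, z_threshold=3.0):
    """Flag an observation that deviates sharply from a user's baseline."""
    mean = statistics.mean(history)
    stdev = statistics.stdev(history)
    if stdev == 0:
        return observed != mean  # any change from a perfectly flat baseline
    z = abs(observed - mean) / stdev
    return z > z_threshold

# Hypothetical daily file-download counts for one user over two weeks.
baseline = [3, 5, 4, 6, 5, 4, 3, 5, 6, 4, 5, 3, 4, 5]

print(is_anomalous(baseline, 5))   # a typical day
print(is_anomalous(baseline, 80))  # a sudden mass download
```

A z-score cutoff is the simplest possible choice here; production systems tend to use richer models, but the principle of flagging departures from learned normal behavior is the same.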

Transparency and choosing the right data are still the biggest hurdles for AI
It has taken a long time for AI engineers and researchers to choose the right data. Just because there is an abundance of datasets doesn't mean they are all suitable for training. Ali Dehghantanha, a threat intelligence researcher at the University of Guelph, pinpointed data filtering as the weakest link in successfully training AI.

"If you train it the wrong way, it's really going to do the wrong thing forever," Dehghantanha said. "Unfortunately, a lot of the data sources available to us are biased or incomplete. And when you train an engine to make these decisions automatically from that data, you are exacerbating the problem."
