Google uses AI to anticipate your web browsing needs

New machine learning developments for Chrome aim to predict what the user wants in real time.

Google has announced a number of machine learning enhancements for its Chrome web browser to improve security and add custom features.

Chrome already uses machine learning to make images more accessible to people with vision problems or to generate real-time captions on videos.

In March, the browser received a new built-in phishing detector that uses machine learning (ML). Google said this new model identifies 2.5 times more potentially malicious sites and phishing attacks than the previous model.

The tech giant now plans to use AI to improve how the web browser handles permission requests for notifications.

“On the one hand, page notifications help deliver updates from sites that interest you; on the other hand, notification permission prompts can become a nuisance,” said Google software engineer Tarun Bansal.

The next update will include a machine learning model running in Chrome that examines notification permission prompts and silences those the user is unlikely to grant, based on how they have responded to similar prompts before.
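Google has not published how this model works, but the idea it describes can be sketched with a simple, entirely hypothetical heuristic: estimate the probability that the user will grant a prompt from their past responses, and silence prompts that fall below a threshold. The function name and threshold below are assumptions for illustration only.

```python
# Illustrative sketch only: Chrome's real model is not public.
# Decide whether a notification permission prompt is worth showing,
# based on how often the user granted similar prompts in the past.

def should_show_prompt(past_grants: int, past_denials: int,
                       threshold: float = 0.15) -> bool:
    """Show the full prompt only if the estimated grant
    probability exceeds a threshold; otherwise silence it."""
    total = past_grants + past_denials
    if total == 0:
        return True  # no history: default to showing the prompt
    grant_rate = past_grants / total
    return grant_rate >= threshold

# A user who denied 9 of their last 10 prompts gets them silenced.
print(should_show_prompt(past_grants=1, past_denials=9))  # False
print(should_show_prompt(past_grants=5, past_denials=5))  # True
```

Chrome's actual model would weigh far richer signals than a single grant rate, but the silencing decision it feeds is of this shape: predict, then suppress low-probability prompts.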

A planned AI improvement will adjust the Chrome toolbar in real time based on the most useful action at the time, such as sharing a link or using the voice search feature. Google said this adjustable toolbar can be customized manually.

“Our goal is to create a browser that is truly and continually useful, and we’re excited about the possibilities that ML provides,” Bansal said.

Google also launched an update to its language identification model, which detects the language of a visited website and predicts whether it needs to be translated for the user. Bansal said Google is seeing “tens of millions of successful translations every day” from this update.

Machine learning cyberattacks

While machine learning offers benefits for web browsing, it can also be used as a tool by hackers to launch difficult-to-prevent cyberattacks. These machine learning-assisted attacks are particularly robust and poorly understood due to the complex algorithms involved.

One such documented attack, described by MIT researchers as a “state-of-the-art” website fingerprinting attack, has been reproduced in detail to allow for further study.

“One of the really scary things about this attack is that we wrote it in JavaScript, so you don’t have to download or install any code,” said computer scientist Jack Cook, lead author of the study. “All you have to do is open a website.”

The studied attack was found to be extremely effective in determining a user’s browsing behavior. In the case of a computer running Chrome on macOS, it was able to identify websites visited by the user with an accuracy of 94pc. Across all browsers and operating systems tested, the researchers found an accuracy of over 91pc.
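The study's code is not reproduced in the article, but the general principle behind such timing side channels can be sketched: measure how much work completes in fixed time slices, since activity elsewhere on the machine perturbs those counts, and then feed the resulting traces to a classifier. The snippet below only shows the trace-capture idea, not the MIT attack itself.

```python
import time

# Illustrative sketch of a generic timing side-channel probe:
# count busy-loop iterations completed in fixed time slices.
# Contention from other activity on the machine changes these counts,
# producing a trace that a machine learning model could classify.

def capture_trace(slices: int = 5, slice_ms: float = 10.0) -> list:
    """Return one iteration count per time slice."""
    trace = []
    for _ in range(slices):
        count = 0
        end = time.perf_counter() + slice_ms / 1000.0
        while time.perf_counter() < end:
            count += 1
        trace.append(count)
    return trace

print(capture_trace())  # counts vary with system load
```

In a browser, a script like this needs only JavaScript and a timer, which is why the countermeasures described below target exactly those two ingredients: the signal and the clock.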

“Someone could embed that into a website and then theoretically be able to spy on other activity on your computer,” Cook said.

The MIT team’s nearly identical version of this machine learning-assisted side-channel attack helped them better understand how it works and how to prevent it. They were surprised to find that deepening their knowledge of these complex attacks revealed fairly simple solutions.

In order to counter the cyberattack, the team created a browser extension that pinged random websites, adding noise to the data and making it harder for an attacker to decode the signals. This saw attack accuracy drop to 62pc, but it also impacted computer performance.
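The extension itself is not published in the article; the snippet below is a hypothetical sketch of the noise-injection idea, interleaving a user's real page loads with fetches of decoy sites at random positions so that an observer's trace mixes genuine and meaningless activity. The decoy list and function names are placeholders.

```python
import random
from typing import List, Optional

# Hypothetical sketch of the noise-injection countermeasure:
# mix decoy fetches into the stream of real page loads at random
# positions, muddying the traffic pattern an attacker observes.

DECOY_SITES = ["example.org", "example.net", "example.com"]  # placeholders

def with_decoys(real_loads: List[str], decoys_per_load: int = 2,
                rng: Optional[random.Random] = None) -> List[str]:
    """Return the real loads with decoy fetches inserted at random."""
    rng = rng or random.Random()
    mixed = list(real_loads)
    for _ in range(decoys_per_load * len(real_loads)):
        mixed.insert(rng.randrange(len(mixed) + 1),
                     rng.choice(DECOY_SITES))
    return mixed

print(with_decoys(["news.example"], decoys_per_load=2))
```

The trade-off the researchers reported follows directly from this design: every decoy fetch costs bandwidth and CPU time, which is why accuracy only fell to 62pc at an acceptable performance price.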

As a second countermeasure, they modified the computer’s timer to return values slightly different from the actual time. This, they explained, made it much more difficult to monitor user activity and reduced the accuracy of the attack to just 1pc.
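Coarsening a clock and adding jitter, the idea behind this countermeasure, can be sketched in a few lines. The resolution value below is an assumption for illustration; the study's exact timer modification is not given in the article.

```python
import random
import time

# Sketch of the fuzzed-timer countermeasure: quantise the clock and
# add random jitter, so the fine-grained timing measurements that
# feed the side channel become unreliable.

def fuzzed_time(resolution_ms: float = 1.0) -> float:
    """Return the time rounded down to `resolution_ms`, plus
    random jitter within one resolution step."""
    res = resolution_ms / 1000.0
    quantised = (time.perf_counter() // res) * res
    return quantised + random.uniform(0, res)

# Back-to-back reads no longer reveal sub-millisecond differences,
# and may even appear to run backwards.
print(fuzzed_time())
print(fuzzed_time())
```

Because the attack trace is built from many tiny timing measurements, even a small amount of timer noise compounds across the trace, which is consistent with the large accuracy drop the researchers reported.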

“I was surprised that such a small mitigation, like adding randomness to the timer, could be so effective,” Cook said. “This mitigation strategy could really be implemented today. It does not affect how you use most websites.”

The researchers plan to use their findings to develop an analytical framework for machine learning-assisted side-channel attacks.

“As researchers, we should really try to dig deeper and do more analytical work, rather than blindly using black-box machine learning tactics to demonstrate one attack after another,” said senior author Mengjia Yan, a researcher at the Computer Science and Artificial Intelligence Laboratory (CSAIL) at MIT.

“The lesson we’ve learned is that these machine learning-assisted attacks can be extremely deceptive.”

