Apple, Microsoft and Google are heralding a new era of what they describe as artificially intelligent smartphones and computers. The devices, they say, will automate tasks like editing photos and wishing a friend a happy birthday.

But for that to work, these companies need something from you: more data.

In this new paradigm, your Windows computer will take a screenshot of everything you do every few seconds. An iPhone will gather information from many applications you use. And an Android phone can listen to a call in real time to alert you to a scam.

Are you willing to share this information?

This change has important implications for our privacy. To deliver new personalized services, companies and their devices need more persistent and intimate access to our data than before. In the past, the way we used apps and retrieved files and photos on phones and computers was relatively siloed. AI needs the big picture to connect the dots between what we do in apps, websites and communications, security experts say.

“Do I feel safe providing this information to this company?” said Cliff Steinhauer, director of the National Cybersecurity Alliance, a nonprofit that focuses on cybersecurity, speaking about companies’ AI strategies.

All of this is happening because OpenAI’s ChatGPT upended the tech industry nearly two years ago. Since then, Apple, Google, Microsoft and others have revised their product strategies, investing billions in new services under the umbrella term of AI. They are convinced that this new type of computer interface, one that constantly studies what you are doing so it can offer assistance, will become indispensable.

The biggest potential security risk of this shift stems from a subtle change in how our new devices work, experts say. Because AI can automate complex actions, such as removing unwanted objects from a photo, it sometimes requires more computing power than our phones can handle. That means more of our personal data may have to leave our phones to be processed elsewhere.

The information is transmitted to the so-called cloud, a network of servers that process requests. Once the information reaches the cloud, it could be seen by others, including company employees, bad actors and government agencies. And although some of our data has always been stored in the cloud, our most personal and intimate data that was once for our eyes only (photos, messages and emails) can now be connected and analyzed by a company on its servers.

Tech companies say they have done everything they can to protect people’s data.

For now, it’s important to understand what will happen to our data when we use AI tools, so I got more information from companies about their data practices and interviewed security experts. I plan to wait and see if the technologies work well enough before deciding if it’s worth sharing my data.

This is what you should know.

Apple recently announced Apple Intelligence, a suite of AI services and its first major entry into the AI race.

The new AI services will be built into Apple’s fastest iPhones, iPads and Macs starting this fall. People will be able to use them to automatically remove unwanted objects from photos, create summaries of web articles, and write responses to text messages and emails. Apple is also revamping its voice assistant, Siri, to make it more conversational and give it access to data across apps.

At Apple’s conference this month, Craig Federighi, the company’s senior vice president of software engineering, introduced Apple Intelligence and shared how it might work: Federighi received an email from a colleague asking to push back a meeting, but that night he was supposed to see a play that his daughter was starring in. His phone then pulled up his calendar, a document containing details about the play and a mapping app to predict whether he would be late to the play if he agreed to a later meeting.

Apple said it was striving to process most AI data directly on its phones and computers, which would prevent others, including Apple, from accessing the information. But for tasks that must be sent to servers, Apple said, it has developed safeguards, including encrypting the data and deleting it immediately.

Apple said it had also put measures in place so that its employees could not access the data, and that it would allow security researchers to audit its technology to make sure it was living up to its promises.

But Apple hasn’t been clear about which new Siri requests could be sent to the company’s servers, said Matthew Green, a security researcher and associate professor of computer science at Johns Hopkins University, who was briefed by Apple about its new technology. Anything that leaves your device is inherently less secure, he said.

Microsoft is bringing AI to new laptops.

Last week, it began rolling out Windows computers called Copilot+ PCs, which start at $1,000. The computers contain a new type of chip and other hardware that Microsoft says will keep your data private and secure. The PCs can generate images and rewrite documents, among other new AI-powered functions.

The company also introduced Recall, a new system to help users quickly find documents and files they’ve worked on, emails they’ve read, or websites they’ve browsed. Microsoft compares Recall to having photographic memory built into your PC.

To use it, you can write informal phrases, such as “I’m thinking about a video call I had with Joe recently when he was holding a coffee mug that said ‘I love New York.'” The computer will then retrieve the video call recording containing those details.

To achieve this, Recall takes screenshots every five seconds of what the user is doing on the machine and compiles those images into a searchable database. Snapshots are stored and analyzed directly on the PC, so Microsoft does not review the data or use it to improve its AI, the company said.

Still, security researchers warned of potential risks, explaining that if the data were hacked, it could expose anything you had ever typed or viewed. In response, Microsoft, which had intended to release Recall last week, postponed its launch indefinitely.

The PCs run a new version of Microsoft’s Windows 11 operating system, which has multiple layers of security, said David Weston, a company executive who oversees security.

Last month, Google also announced a suite of artificial intelligence services.

One of its biggest reveals was a new AI-powered scam detector for phone calls. The tool listens to calls in real time, and if a caller sounds like a potential scammer (asking for a bank PIN, for example), the phone notifies you. Google said people would have to turn on the scam detector, which runs entirely on the phone. That means Google won’t listen to the calls.

Google announced another feature, Ask Photos, which does require sending information to the company’s servers. Users can ask questions like “When did my daughter learn to swim?” to surface the first images of their child swimming.

Google said its workers could, in rare cases, review Ask Photos conversations and photo data to address abuse or harm, and that the information could also be used to help improve its Photos app. To put it another way, your question and the photo of your child swimming could be used to help other parents find pictures of their children swimming.

Google said its cloud was locked down with security technologies such as encryption and protocols to limit employee access to data.

“Our approach to privacy protection applies to our AI features, regardless of whether they are enabled on the device or in the cloud,” Suzanne Frey, a Google executive who oversees trust and privacy, said in a statement.

But Green, the security researcher, said Google’s approach to AI privacy seemed relatively opaque.

“I don’t like the idea of my very personal photos and searches going to a cloud that is not under my control,” he said.
