According to new research from IBM, 35% of companies now use AI in their business. From health care to manufacturing to government, data-driven organizations across all major industries are realizing the benefits of AI: automation of repetitive tasks, faster processing of data, improved accuracy, and more.
But as AI adoption becomes more widespread and advanced, business leaders must be mindful of a new set of complexities and issues that AI presents, including concerns over job loss, automated weapons, algorithmic bias, “deepfakes” (AI-generated fake videos), and – at the forefront of it all – data privacy violations. Gartner predicts that spending on data privacy efforts will reach $8 billion by the end of 2022.
Undoubtedly, AI can be a powerful asset; however, unchecked developments are increasingly fueling regulatory complications, especially when biased data sets come into play. Amid rising fears, it’s imperative to ask: How can companies use AI to drive greater efficiency in business results, all the while prioritizing data privacy and protection? Let’s start by defining what AI and data privacy are and how AI affects privacy.
What Is Data Privacy?
Data privacy is the practice of ensuring that data about individuals and organizations remains confidential: personal information should not be collected without consent, and it should not be used for purposes other than those for which it was collected.
Although data privacy laws are designed to protect both personal and commercial data from being accessed without permission, their stringency varies depending on where you live and what kind of information is being protected. The European Union’s General Data Protection Regulation (GDPR), one of the strictest data regulations in the world, gives people more control over how their personal data is used. The GDPR is built on seven principles:
- Lawfulness, fairness, and transparency
- Purpose limitation
- Data minimization
- Accuracy
- Storage limitation
- Integrity and confidentiality (security)
- Accountability
What Is AI?
Artificial intelligence (AI) is a sub-field of computer science that aims to create machines capable of intelligent behavior. Several types of AI exist:
- Artificial Narrow Intelligence: Systems that perform one task, such as playing checkers or recognizing facial features. Also known as “weak AI.”
- Artificial General Intelligence: Systems that could perform a wide variety of tasks and transfer skills from one situation to another, much as humans do. Also known as “strong AI,” it does not yet exist, though humanoid robots such as Toyota’s T-HR3 and Tesla’s Optimus – which can walk, dance, and interact with people – are often cited as early steps in that direction.
- Artificial Super Intelligence: A hypothetical form that harbors computer intelligence beyond the brightest human minds. Often depicted in futuristic books and films.
These impressive technological advancements come at a high cost: AI requires huge amounts of contextualized, refined, high-quality data to be collected, transformed, and processed. As AI grows more capable, how are technologists, businesses, and governments ensuring data privacy and data security?
The Impact of AI on Data Privacy
According to a recent survey from Cisco, 60% of consumers are concerned about how organizations are using their personal data for AI, while 65% have lost trust in organizations because of their use of AI.
This loss of trust isn’t rooted in science fiction but in real-world issues with AI bias that have made headlines in recent years:
- In 2019, investigators found that a risk-prediction algorithm widely used in U.S. hospitals systematically favored white patients over Black patients.
- Amazon’s AI-based hiring tool was found to be significantly biased against women. It had been trained on a decade of résumés submitted mostly by men, so the model wrongly learned that male candidates were preferable.
Examples like the above have given rise to concerns over the misuse of AI and have left governments and society scrambling to accelerate legislation and policy reform, to both ensure responsible AI and curb data privacy threats. Some recent examples:
- The U.S. National Institute of Standards and Technology (NIST) initiated workshops for federal engagement in developing AI standards.
- The European Union published its Ethics Guidelines for Trustworthy AI.
- General AI bills and resolutions were introduced in 17 U.S. states in 2022, many stipulating regular audits to ensure transparency, fairness, and accountability in automated decision-making systems.
Best Practices for Leveraging AI While Ensuring Data Privacy
Organizations and individuals can adopt AI best practices to safeguard against data privacy violations. Here are a few:
- Implement strong encryption standards. Encrypting customer data at rest and in transit is crucial to keeping it safe from hackers and other malicious third parties (a minimal encryption sketch follows this list).
- Use synthetically generated data to train algorithms rather than biased datasets that contain sensitive information (see the synthetic-data sketch below).
- Implement robust AI governance for decision-making systems so that every use of an algorithm can be tracked and audited (see the audit-logging sketch below).
- Ensure that all employees are trained on how to handle sensitive information.
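As an illustration of the first practice, here is a minimal sketch of symmetric encryption for a single piece of customer data using Python’s widely used cryptography package. The field and key-handling approach are assumptions made for the example; in production, keys would live in a dedicated key-management service rather than alongside application code.

```python
from cryptography.fernet import Fernet

# In practice, generate the key once and store it in a key-management
# service (KMS); never hard-code it next to application code.
key = Fernet.generate_key()
cipher = Fernet(key)

# Encrypt a hypothetical piece of customer PII before storing it.
email = b"jane.doe@example.com"
encrypted_email = cipher.encrypt(email)

# Only services holding the key can recover the original value.
decrypted_email = cipher.decrypt(encrypted_email)
assert decrypted_email == email
```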
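The second practice, training on synthetic rather than real personal data, can start as simply as sampling records from assumed statistical distributions. The column names and distributions below are illustrative assumptions, not a prescription; more sophisticated approaches fit generative models to the structure of the real data.

```python
import numpy as np

rng = np.random.default_rng(seed=42)
n_records = 1_000

# Hypothetical applicant records drawn from assumed distributions,
# containing no real individual's information.
synthetic_applicants = {
    "age": rng.integers(18, 75, size=n_records),
    "annual_income": rng.lognormal(mean=10.8, sigma=0.5, size=n_records).round(2),
    "years_experience": rng.integers(0, 40, size=n_records),
}

# The synthetic table can now be used to train or test a model
# without exposing sensitive customer data.
print({column: values[:3] for column, values in synthetic_applicants.items()})
```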
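For the third practice, a simple starting point is to log every automated decision with enough context to review it later. The wrapper below is only a sketch: the decision function, model version, and input fields are hypothetical placeholders, and a real governance program would add access controls, retention rules, and human review on top of it.

```python
import json
import logging
from datetime import datetime, timezone

logging.basicConfig(level=logging.INFO)
audit_log = logging.getLogger("ai_audit")

def audited_decision(decide, inputs, model_version):
    """Run an automated decision and record it for later audit."""
    outcome = decide(inputs)
    audit_log.info(json.dumps({
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "model_version": model_version,
        "inputs": inputs,
        "outcome": outcome,
    }))
    return outcome

# Example with a placeholder scoring rule standing in for a real model.
result = audited_decision(
    decide=lambda x: "approve" if x["credit_score"] > 650 else "manual_review",
    inputs={"credit_score": 702},
    model_version="loan-screening-v1.2",
)
```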
The Future of AI and Data Privacy
AI is fast becoming an integral part of our daily lives; Alexa, Siri, and Google Home are just a few examples. In the health care sector, it will soon be possible to administer life-saving drugs based on test results analyzed by AI. Similarly, AI will play a major role in granting or denying opportunities such as housing, insurance, education, legal services, and employment. Policymakers will need to enact new laws to prevent potential biases and limit unfair decision-making, and governments will need to define penalties for transgressions. Legislators are already in motion:
- The EU AI Act: The European Commission is working on the AI Act in a bid to regulate AI applications. Proposed to come into effect in 2023, it takes a risk-based approach: AI systems will be classified as posing unacceptable, high, limited, or minimal/no risk. Systems posing unacceptable risk, those considered a clear threat to people’s safety and rights, will be banned. High-risk systems will be allowed to operate only under strict conditions, limited-risk systems must meet transparency requirements such as telling users they are interacting with an AI, and minimal/no-risk systems will be able to operate without restrictions.
- Big Tech Regulation: Companies such as Google, Microsoft, and BMW are defining corporate AI policies focused on safety, security, fairness, diversity, and privacy. Several companies, including BCG, Salesforce, and IBM, have already appointed AI ethics officers, and more such roles will follow to oversee internal policies and ensure compliance with AI laws.
Takeaway
Despite its great potential, AI brings its share of challenges for society. Such challenges aren’t entirely new: they have emerged with the advent of every major technology, from mass media to motorized transport to space flight, and each time society has come together to define rules and safeguards around them. AI will follow a similar course. As new threats to data privacy and security are identified, the world will devise legislation to mitigate them.