What Is Ethical AI and How to Survive in Increased Privacy Reality
April 8, 2024 · 5 min. read
Contents:
- What Is Data Ethics, Data Privacy, and Data Security?
- Example of Not Very Ethical Data Usage
- Ethical Use of Data by AI from Two Perspectives
- Perspective 1: Ethical AI that Makes Customers Happy
- Perspective 2: Ethical AI that Mitigates Company’s Risks
- New Ways to Personalize Offers
- How AI Becomes Unethical
- Code of Ethics Is Your Competitive Advantage
- Ethical AI Is a Result of Teamwork
- Final Thoughts
You get a personalized website experience in exchange for personal information. Collecting your data for clearly declared purposes is fine as long as its privacy and security are ensured. But what if a brand handles your sensitive information with an intent you never agreed to, or wants to profit from biased data-driven insights?
Data ethics is the morality behind using data. But can businesses survive with ethical AI algorithms, especially in a world of increased privacy?
What Is Data Ethics, Data Privacy, and Data Security?
Data Ethics, just like ethics in life, is about protecting personal information and using it morally. Unethical actions include collecting data that the user doesn’t want to share or making any of their PII (Personally Identifiable Information) publicly available. PII includes any data tied to a client’s identity: their name, birth date, address, phone, email, SSN (Social Security Number), passport number, credit card, IP address, etc.
Once businesses collect PII, they should ensure its protection. Data protection, in turn, falls into two subcategories: data privacy and data security. Data privacy (or information privacy) is the legal and ethical obligation to keep sensitive information inaccessible to third parties, while data security is the technical side of preventing unwanted access to your clients’ database. So, when you store PII and use it legally within your company, you ensure data privacy; when you don’t let strangers get at that data, that’s data security.
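To make the distinction concrete, here is a minimal Python sketch. The record fields and the mask_pii / encrypt_at_rest helpers are hypothetical, and the "encryption" is a dependency-free toy rather than production cryptography: masking identifiers before a record leaves your company is a data privacy measure, while making the stored record unreadable without a key is a data security measure.

```python
import hashlib
import json

customer = {
    "name": "Jane Doe",
    "email": "jane@example.com",
    "last_purchase": "running shoes",
}

def mask_pii(record: dict) -> dict:
    """Data privacy: strip or pseudonymize identifiers before the record
    is shared, e.g. with a third-party analytics tool."""
    masked = dict(record)
    masked["name"] = "***"
    # Replace the email with an irreversible pseudonym so behavior can be
    # analyzed without exposing the person behind it.
    masked["email"] = hashlib.sha256(record["email"].encode()).hexdigest()[:12]
    return masked

def encrypt_at_rest(record: dict, key: bytes) -> bytes:
    """Data security: make the stored record unreadable without the key.
    A real system would use a vetted cipher such as AES; this toy XOR is
    only here to keep the sketch dependency-free."""
    raw = json.dumps(record).encode()
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(raw))

print(mask_pii(customer))                              # safe to hand to analytics
print(encrypt_at_rest(customer, b"secret-key")[:16])   # unreadable on disk
```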
In this sense, Data Ethics is a code of moral behavior for Artificial Intelligence when it handles personal information. Using sensitive data for the right purposes, without bias or bad intent, makes AI an ethical instrument rather than a soulless machine. And it’s our responsibility to build the bridge between formal mathematical models and our morals. Here is an example of how AI insights were used for aggressive sales and interfered with the private life of a teenager.
Example of Not Very Ethical Data Usage
One of the best-known examples of unethical AI algorithms is probably the Target case from around a decade ago. The father of a high-school student found coupons for baby clothes in his mailbox, sent by Target. He was furious and blamed the store for encouraging his daughter to get pregnant. But what had actually happened?
The giant retailer analyzed the purchases of hundreds of its customers, and its algorithms combined multiple factors to detect women in the early stages of pregnancy. These data mining techniques even allowed Target to estimate the due date. So, after the girl visited Target, the company leveraged the accumulated information and promoted its products without caring much about the moral side of the case. Why? Because the poorly designed, barely ethical AI didn’t consider the customer’s age, and no one expected that her father didn’t know about the pregnancy. So, in trying to anticipate the girl’s needs, offer help, and make sales, Target revealed a secret and caused a family conflict.
Ethical Use of Data by AI from Two Perspectives
Perspective 1: Ethical AI that Makes Customers Happy
Ideally, all companies would follow a universal procedure for storing and using their customers’ personal data. Then we would know what to expect each time we submit our contact info. Clients are ready to provide PII to get personalized offers without compromising their sensitive data. Sharing feels normal:
1. When you know that your personal information stays confidential. In other words, you treat your data as a secret between you and the company, and that data should never be exposed to anyone else.
2. When you know how your data will be used. Before completing a website form, you need to understand all the ways the company may use your data. Brands should be transparent and open with their customers and let users know if their personal data will be sold or used for other purposes. This helps clients stay in control of their sensitive information.
Perspective 2: Ethical AI that Mitigates Company’s Risks
Transparent collection and secure storage of information are not the only challenges for a brand. It’s also essential to offer consumers an ethical choice based on the analysis of their data. Every time you get a data-driven insight from AI and want to use it for personalization, ask whether it’s legal to disclose the mined information and whether you have permission to do so. And, if these risks are covered, consider bias: does this offer or action create any economic or social injustice?
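As a rough illustration of such a check, here is a minimal Python sketch. The Consent structure, the allowed_purposes field, and the purpose names are hypothetical, not any specific consent platform’s API:

```python
from dataclasses import dataclass, field

@dataclass
class Consent:
    # Purposes the customer explicitly agreed to when sharing their data.
    allowed_purposes: set = field(default_factory=set)

def can_personalize(consent: Consent, purpose: str, insight_is_sensitive: bool) -> bool:
    """Use an AI-driven insight only if this purpose was consented to and
    the insight does not expose something the customer kept private."""
    if purpose not in consent.allowed_purposes:
        return False  # no permission for this use of the data
    if insight_is_sensitive:
        return False  # e.g. health or pregnancy inferences stay private
    return True

consent = Consent(allowed_purposes={"order_updates", "product_recommendations"})
print(can_personalize(consent, "product_recommendations", insight_is_sensitive=False))  # True
print(can_personalize(consent, "targeted_mailers", insight_is_sensitive=True))          # False
```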
However, PII-based personalization started becoming obsolete when Google announced its decision to preserve customers’ anonymity by restricting support for third-party cookies in Chrome. Does that mean the era of personalization has ended and brands can no longer customize their offers? No. But businesses will have to switch to alternative methods of making recommendations. Here are some of them.
New Ways to Personalize Offers
If you can no longer learn users’ preferences from third-party data, ask for them directly. For example, ask for a favorite shoe model or brand and give shoppers instant, answer-based recommendations. Such real-time recommendations last only for one web session, but they work; see the sketch below. Alternatively, be open and try to get clients’ PII in exchange for rewards: invite them to play games, arrange promotions and giveaways, or let them take part in contests. Inbound techniques are still a decent way to attract leads and even convert them into customers on the spot. But for this, you should provide users with an excellent browsing experience, a clear CTA, and a seamless path from searching to buying.
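Here is the sketch mentioned above: a minimal example of answer-based recommendations that live only in the current session, assuming a hypothetical in-memory catalog and a single quiz answer (no cookies or PII involved):

```python
# Hypothetical catalog; in practice this would come from your product database.
CATALOG = [
    {"name": "Trail Runner X", "brand": "Acme",   "category": "running"},
    {"name": "City Sneaker",   "brand": "Acme",   "category": "casual"},
    {"name": "Peak Hiker",     "brand": "Summit", "category": "hiking"},
]

def recommend_for_session(answer: dict, catalog: list) -> list:
    """Recommend items matching what the shopper told us in this session.
    The answer lives only in session state and is discarded afterwards."""
    return [
        item for item in catalog
        if item["brand"] == answer.get("favorite_brand")
        or item["category"] == answer.get("favorite_category")
    ]

# The shopper answered a one-question prompt while browsing.
session_answer = {"favorite_brand": "Acme"}
print(recommend_for_session(session_answer, CATALOG))
```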
How AI Becomes Unethical
AI algorithms themselves are neutral; they become biased when people teach them to be. Machines generate insights and patterns from the data they are trained on in Data Science projects. That’s why careful, ethical preparation of datasets before “feeding” them to machines is essential for data scientists. Correct labeling and categorization of pictures, behavioral patterns, recordings, etc., is the key to avoiding moral issues when AI processes the data.
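As one illustration of what that preparation can include, the sketch below audits how labels are distributed across a sensitive attribute before training. The group and label fields are invented for the example, and a real audit would of course go deeper:

```python
from collections import Counter

# Hypothetical labeled training records; real datasets would be far larger.
records = [
    {"group": "A", "label": "approved"},
    {"group": "A", "label": "approved"},
    {"group": "A", "label": "rejected"},
    {"group": "B", "label": "rejected"},
    {"group": "B", "label": "rejected"},
    {"group": "B", "label": "approved"},
]

def approval_rate_by_group(data: list) -> dict:
    """Share of positive labels per group; a large gap between groups is a
    signal to re-check labeling and sampling before training."""
    totals, positives = Counter(), Counter()
    for row in data:
        totals[row["group"]] += 1
        positives[row["group"]] += row["label"] == "approved"
    return {g: positives[g] / totals[g] for g in totals}

print(approval_rate_by_group(records))  # A is about 0.67, B about 0.33: a gap worth investigating
```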
The Airbnb case is an example of questionably ethical data usage. The company decided to reduce social inequality and align Black and White hosts’ revenues by introducing an algorithm that recommends raising rates. Due to various factors, Black hosts earned about $12 less, so the AI offered to add $5 to White hosts’ current rates and $13 to Black hosts’. All hosts were supposed to see their revenues grow, and at first it worked: the economic gap decreased by 71.3%. But 41% of Black hosts refused to use the algorithm, and the racial and economic disparity grew larger than before.
Code of Ethics Is Your Competitive Advantage
Since businesses continuously collect, analyze, and use customers’ data, being transparent about how their AI uses it is not just a good idea. In 2024, it’s a must-have approach to ethical data usage that will help you gain your clients’ trust. Remember how often bots show you ads that only irritate you? And what if machine algorithms suddenly reveal something about you that you never intended to share? An ethics program needs to be transparent, industry-relevant, and compliant with current data privacy regulations.
A good AI ethics code should be in line with existing regulations such as the GDPR (General Data Protection Regulation), CCPA (California Consumer Privacy Act), FERPA (Family Educational Rights and Privacy Act), and HIPAA (Health Insurance Portability and Accountability Act). These government documents cover data ethics rules, so following their recommendations will help you stay data-ethical. However, there are also private companies that develop guidelines. For example, Bloomberg and two other companies worked out an ethical data usage guide for data scientists, the Community Principles on Ethical Data Sharing (CPEDS).
Ethical AI Is a Result of Teamwork
Your AI-driven product will be ethical only when every team member clearly understands the reasons and, ideally, shares the values behind the ethical use of data. You can also reward employees for identifying ethical risks. In large organizations, an Ethics Committee could be the best solution, while in startups and SMBs, one person can control and coordinate all issues related to data ethics. Building awareness, or even nurturing a culture in your company, will help create an AI-ethics-friendly environment where each member feels personally responsible for keeping the ML algorithms and the end product moral.
Final Thoughts
It’s probably too early to talk about the Rise of the Machines, because super-intelligent AI algorithms and smart apps still need an ethical human touch. However, that touch isn’t only the responsibility of the data scientists and annotators who label and pre-process big data. It requires a company-wide strategy that coordinates every action related to data.
We hope your GPS won’t route you down the street lined with its sponsor’s shops but will choose the optimal route the next time you drive home.