3 Major Ethical Challenges in Our Data-driven World

Some years back, Google's ad results falsely implied that Latanya Sweeney, a computer scientist and director of the Data Privacy Lab at Harvard, had been arrested; the background-check company behind the ad had no arrest record for her. Afterward, Sweeney ran a study, searching more than 2,100 names of real people on Google. She found that names associated with Black people were roughly 25 percent more likely to be shown an ad suggestive of an arrest record, evidence of racial bias in the ad-serving algorithm.

Think about it: how would you feel if a biased algorithm denied you an opportunity you were best suited for?

"Without big data analytics, companies are blind and deaf, wandering out onto the web like deer on a freeway." (Geoffrey Moore)

Data has become one of the world's most valuable assets. From drug development in healthcare to self-driving vehicles in transportation to advanced fraud detection in business, today's leading industries run on data. However, we must uphold proper data ethics to ensure the data revolution remains a force for good.

What is Data Ethics?

Data ethics is the set of moral principles governing how data is collected, protected, and used. Companies that care for customers' data privacy tend to earn their trust, and more customer trust means more sales and higher revenue. Unethical practice, by contrast, erodes trust and damages a company's reputation.

The Data Dilemmas

Data use raises multiple ethical concerns, so be mindful of whom you share your data with. Some of the key challenges include:

  • Informed Consent

One of the ways companies and researchers gather data is through informed consent: they disclose adequate details about the research or project, including its objectives and the intended use of the data, and you may accept or decline.

But things sometimes go wrong: the company or researcher may not explicitly disclose their intent. For instance, between 1932 and 1972, the US Public Health Service conducted the Tuskegee Study of Untreated Syphilis, which examined the effects of untreated syphilis on African-American men without their informed consent. Of the 600 Black men enrolled, researchers allowed the 399 with syphilis to go untreated, even after penicillin became the standard cure for the disease in the late 1940s. By the end of the forty-year experiment, only 74 of those 399 men were still alive.

  • Data Insecurity

Data insecurity is another ethical failure. When there is a data breach (a situation in which unauthorized parties steal confidential information from a database or network), both individuals' privacy and businesses are at risk. Identity theft, financial fraud, corrupted databases, and emotional distress often follow.

In mid-2022, a massive breach hit a database hosted on the cloud of Alibaba, the Chinese e-commerce giant. An anonymous hacker known as ChinaDan offered for sale 23 terabytes of stolen data covering roughly one billion people. Likewise, in June 2021, data on about 700 million LinkedIn users was harvested; sensitive details such as names, email addresses, ID numbers, and phone numbers later appeared for sale on dark-web forums. Organizations should work to prevent such breaches through measures like Zero-Trust Architecture (ZTA) to limit data access, strong encryption of data at rest and in transit, and secure destruction of data before hardware disposal.
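As a minimal illustration of one such safeguard, the sketch below (plain Python, standard library only; the record fields are hypothetical) pseudonymizes sensitive identifiers with a keyed hash before storage, so a leaked table no longer exposes raw names or email addresses:

```python
import hashlib
import hmac
import secrets

def make_salt() -> bytes:
    # A per-deployment secret key; in practice this lives in a key vault,
    # never in the same database as the data it protects.
    return secrets.token_bytes(16)

def pseudonymize(value: str, salt: bytes) -> str:
    # Keyed hash (HMAC-SHA256): without the key, stored values cannot be
    # reversed or cheaply brute-forced back to the original identifiers.
    return hmac.new(salt, value.encode("utf-8"), hashlib.sha256).hexdigest()

salt = make_salt()
record = {"name": "Jane Doe", "email": "jane@example.com"}  # toy data
stored = {field: pseudonymize(value, salt) for field, value in record.items()}
```

This is only one layer of a defense-in-depth strategy; it complements, rather than replaces, access controls and full encryption at rest.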

  • Algorithm Bias

A discriminatory model might have influenced your last job rejection without your ever knowing. Unlike a data breach, algorithm bias is hard to prove. It is a systematic error that produces unfair outputs, and it usually enters through biased training data: a model trained on skewed data learns that skew and reproduces it in its predictions.

Amazon, for instance, scrapped an AI recruiting tool because of gender discrimination. Trained on roughly ten years of resumes, most of them from men, the model learned the male dominance of the tech industry and downgraded female applicants, particularly those who had attended all-women's colleges. Algorithmic transparency and accountability requirements are among the ways to prevent or reduce such unfair bias.
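One simple way such bias can be surfaced is by comparing a model's selection rates across groups. The toy sketch below (the decision data is invented for illustration) computes per-group selection rates and the disparate-impact ratio, a common screening metric:

```python
# Each tuple is (group, selected): selected=1 means the model
# advanced the candidate. Invented data for illustration only.
decisions = [
    ("A", 1), ("A", 1), ("A", 1), ("A", 0), ("A", 1),
    ("B", 1), ("B", 0), ("B", 0), ("B", 0), ("B", 1),
]

def selection_rate(decisions, group):
    # Fraction of candidates in `group` that the model selected.
    outcomes = [sel for g, sel in decisions if g == group]
    return sum(outcomes) / len(outcomes)

rate_a = selection_rate(decisions, "A")  # 4 of 5 selected
rate_b = selection_rate(decisions, "B")  # 2 of 5 selected

# Disparate-impact ratio: values well below 1.0 (a common rule of
# thumb flags anything under 0.8, the "four-fifths rule") suggest
# the model may be treating the groups unequally.
ratio = rate_b / rate_a
```

A low ratio does not by itself prove discrimination, but it tells auditors where to look, which is exactly what transparency requirements are meant to enable.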

Thanks for coming this far!

Stay curious, stay persistent, and enjoy the satisfaction of solving problems. Happy learning!
