AI’s Impact on Your Life: 5 Insidious Ways You Didn’t Even Notice

You might have the wrong idea about artificial intelligence, and it’s not your fault. Over the past few months, stories about the technology’s supposed power and abilities have been everywhere.

These stories have ranged from the sensational to the downright silly. AI experts and pioneers don’t help matters when they sign open letters calling for a halt to AI research and warning of an impending extinction-level event.

AI is more than chatbots and image generators. It’s also not Skynet announcing it’s going live to kill us all. In fact, artificial intelligence isn’t very smart. It has a documented record of hallucinating facts and making bad decisions, and glossing over that record has allowed it to cause real harm to people.

While these harms have a number of causes, most of them trace back to a long-standing problem: bias. AI systems like ChatGPT and the algorithms that recommend videos on YouTube are trained on enormous amounts of data. That data comes from people, many of whom are, sadly, biased, racist, and sexist.

For example, suppose you wanted to build a bot that decides who gets into a university. You might feed it demographic data on the kinds of people who have earned degrees in the past. But do that, and you’d probably end up admitting mostly white men and rejecting many applicants of color, because historically, colleges have rejected far more people of color than white people.
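The mechanism is simple enough to show in a few lines. Below is a minimal, entirely hypothetical sketch: the groups, records, and numbers are invented for illustration, and the "model" does nothing but memorize each group’s historical admission rate. That is the core failure mode: a system trained on biased outcomes faithfully reproduces them.

```python
# Toy "admissions model" trained on biased historical outcomes.
# All data here is invented for illustration.
from collections import defaultdict

# Invented historical records: (demographic_group, was_admitted)
history = [
    ("group_a", True), ("group_a", True), ("group_a", True), ("group_a", False),
    ("group_b", True), ("group_b", False), ("group_b", False), ("group_b", False),
]

def train(records):
    """Learn each group's historical admit rate -- nothing more."""
    totals, admits = defaultdict(int), defaultdict(int)
    for group, admitted in records:
        totals[group] += 1
        admits[group] += admitted
    return {g: admits[g] / totals[g] for g in totals}

def predict(model, group, threshold=0.5):
    """Admit whenever the group's historical rate clears the bar."""
    return model[group] >= threshold

model = train(history)
print(predict(model, "group_a"))  # True: 75% historical admit rate
print(predict(model, "group_b"))  # False: 25% historical admit rate
```

Real systems are far more complex, but the principle holds: if the training labels encode past discrimination, the predictions will too.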

And this is not an exaggeration. We’ve seen it happen many times, each in a different way. AI has dominated headlines only in the past few months, but it has been shaping parts of our lives for years. Long before ChatGPT, AI programs were used to decide whether the unemployed received benefits, whether you got housing, and even what kind of health care you received.

This context gives a realistic picture of what the technology can and can’t do. Without it, you’re likely to buy into AI hype, which is dangerous in its own right: the hype carries misinformation and false claims about these bots along with it. This technology has worked its way into our lives in many ways. Here are five of the most important ones.

Home Mortgages

If you want to buy a house, your application will probably be run through a set of formulas. Your FICO credit score, for example, is calculated by a formula, and it plays a big part in whether you get a loan of any kind.

But you may also have to pass an AI approval process. Fannie Mae and Freddie Mac introduced automated underwriting software in 1995. It was supposed to make approving or rejecting a home loan faster and more efficient by using AI to estimate how likely a prospective borrower is to default.
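To make "estimating the likelihood of default" concrete, here is a deliberately crude sketch. The features, weights, and cutoff are invented; real underwriting systems like Fannie Mae’s and Freddie Mac’s weigh far more variables. The point is only the shape of the decision: a score is computed, a threshold is applied, and the applicant never sees either.

```python
# Hypothetical underwriting score -- all weights and cutoffs invented.

def default_risk(credit_score: int, debt_to_income: float) -> float:
    """Crude risk estimate: higher means more likely to default."""
    # Penalize credit scores below 700, scaled to a 0..1 range.
    score_term = max(0.0, (700 - credit_score) / 400)
    return min(1.0, 0.5 * score_term + 0.5 * debt_to_income)

def approve(credit_score: int, debt_to_income: float, cutoff: float = 0.4) -> bool:
    """Approve the loan only if the estimated risk is under the cutoff."""
    return default_risk(credit_score, debt_to_income) < cutoff

print(approve(760, 0.25))  # True: low estimated risk
print(approve(580, 0.55))  # False: high estimated risk
```

If inputs like credit score correlate with race because of past discrimination, a formula like this inherits that correlation even without ever seeing race directly.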

These systems were billed as race-blind, but the results were ugly. A 2021 report from The Markup found that mortgage-lending algorithms in the U.S. were 80% more likely to reject Black applicants, 50% more likely to reject Asian and Pacific Islander applicants, 40% more likely to reject Latino applicants, and 70% more likely to reject Native American applicants than comparable white applicants.

The numbers were even starker in some places: in Chicago, Black applicants were 150% more likely to be rejected than white applicants, and in Waco, Texas, Latino applicants were 200% more likely to be rejected.

Jail and Prison Sentencing

When it comes to handing down punishment, or leniency, in a court of law, we picture judges and lawyers. In reality, much of that work leans on algorithms that estimate whether a defendant is likely to commit crimes again.

In 2016, ProPublica found that a popular risk-assessment algorithm used by judges was falsely flagging Black defendants as future reoffenders at nearly twice the rate of white defendants (45% vs. 23%). White defendants, meanwhile, were rated as less likely to reoffend than they actually were. The result was a badly skewed recidivism score.

Even now, that same tool is used in states including New York, California, Florida, and Wisconsin to assess how dangerous defendants are.

Job Hiring

As if job hunting weren’t frustrating enough, your resume may have to get past a biased HR bot first.

Several kinds of bots are used in hiring. HireVue, whose recruiting software is used by companies like Hilton and Unilever, analyzes applicants’ facial expressions and voices. The AI then scores them and tells the employer how they compare to its existing workforce.

There are also AI tools that rapidly scan your resume for the right keywords, which means you can be rejected before a human in HR ever looks at your cover letter. And as with so many other AI applications, the result is that more applicants of color are turned away than comparable white applicants.
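Keyword screening is the simplest of these filters, and a minimal sketch shows why it can quietly exclude people. The keyword list and threshold below are invented for illustration; real applicant-tracking systems are more elaborate, but the filtering logic is much the same.

```python
# Hypothetical keyword-based resume screen -- keywords and
# threshold are invented for illustration.

REQUIRED_KEYWORDS = {"python", "sql", "agile", "leadership"}

def screen_resume(text: str, min_hits: int = 2) -> bool:
    """Pass the resume to a human only if enough keywords appear."""
    words = set(text.lower().split())
    return len(REQUIRED_KEYWORDS & words) >= min_hits

print(screen_resume("Built Python and SQL pipelines"))   # True
print(screen_resume("Experienced community organizer"))  # False
```

Notice that the second applicant may be perfectly qualified; they simply described their experience in words the filter wasn’t told to look for. Whoever chooses the keyword list effectively chooses who gets read.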

Medical Diagnosis and Treatment

Hospitals and doctors’ offices have long used automated tools to assist with diagnosis. Institutions like the Mayo Clinic, for instance, have used AI for years to help detect issues such as cardiac abnormalities.

But wherever AI goes, bias follows, and medicine is no exception. A 2019 study published in Science found that an algorithm used to manage patient populations routinely led to Black patients receiving worse care than comparable white patients, and to Black neighborhoods and patients with equal levels of need receiving less funding.

With the rise of ChatGPT, and with health-tech startups racing to build diagnostic chatbots (to varying degrees of cringeworthiness), many experts worry that the harms we’ve already seen from chatbots could compound these bias problems. The medical field’s ugly history of scientific racism only makes that worse.

Recommendation Algorithms

The most visible way AI touches your daily life may be through social media algorithms, which are probably how you found this article in the first place. These AIs can surface your friend’s latest Instagram photo from their trip to Italy or your mom’s embarrassing Facebook status, but they can also promote radicalizing content on YouTube or push a far-right agenda on Twitter.

Bad actors have long found ways to exploit these algorithms to push their own political agendas. It happens constantly on Facebook, where sprawling troll farms in countries like Albania and Nigeria spread disinformation to try to sway elections.

At their best, these systems help you find a fun new video to watch on YouTube or Netflix. At their worst, they serve you a film arguing that vaccines are dangerous and the 2020 election was stolen.

That, in the end, is the double-edged nature of AI. These technologies hold real promise for helping people make decisions faster and more easily. But when AI is weaponized by bad actors, abused by greedy companies, and sloppily bolted onto historically racist and biased systems like incarceration, it does far more harm than good, and you don’t need an AI to tell you that.
