I Can’t Stop Thinking About This Stuff
In this article, Navigating the Ethical Frontier: The Promise and Risks of AI, we'll look at both the good and the bad sides of AI.
Okay, so I’m sitting here at 2 AM again, scrolling through Twitter, and I see this thread about AI replacing teachers. Some guy is celebrating because “finally, kids won’t have to deal with biased human teachers anymore.” And I’m just… staring at my screen, feeling this weird mix of anger and sadness.
Because here’s the thing – I’ve been thinking about AI ethics for months now, and it’s honestly driving me a little crazy. Not in a bad way, but in that way where you suddenly see something everywhere and can’t unsee it. Like when you learn a new word and then hear it five times in one day.
Last week, I was helping my niece with her college application essay. She’s applying to art school, super talented, been drawing since she could hold a crayon. Halfway through our conversation, she casually mentions that she used AI to “polish” her personal statement. Just to make it sound better, she said.
I didn’t know how to react. Part of me wanted to lecture her about authenticity. But another part of me thought – isn’t this just… evolution? We use spell check. We use grammar tools. Where exactly is the line?
That’s when it hit me: we’re all just winging it here. None of us really know what we’re doing with this technology.
The Good Stuff That Makes Me Hopeful
But let’s back up. Because despite my 2 AM anxiety spirals, there’s genuinely incredible stuff happening that makes me feel optimistic about humans.
My neighbor Maria has Parkinson’s. Her handwriting, which used to be this beautiful cursive, started getting shaky and hard to read. She was getting frustrated, feeling disconnected from one of the few ways she could still express herself clearly. Then her daughter introduced her to voice-to-text AI that actually understands her speech patterns, even when the Parkinson’s affects her pronunciation.
Now Maria writes poetry again. Not types – writes. The AI helps her get her thoughts down, and then she goes back and edits by hand, making it hers. She showed me a poem last week about growing old gracefully, and I’m not ashamed to say I teared up a little.
Or take my friend David, who’s been trying to learn Spanish for years. Traditional classes never clicked for him – too rigid, too embarrassing when he made mistakes. But he found this AI language partner that lets him practice conversations without judgment. He can mess up a thousand times, and it just gently corrects him and keeps going. Six months later, he’s planning a trip to Mexico and actually feels confident about communicating.
These aren’t Silicon Valley success stories. These are real people in my actual life finding ways to use AI that enhance their humanity instead of replacing it.
The medical stuff blows my mind too. My cousin works in radiology, and she told me about this AI that helps spot early-stage breast cancer in mammograms. It’s not replacing her – she’s still the one making the final call. But it’s like having a really, really good second pair of eyes. She said it’s already helped her catch two cases she might have missed.
But Then Reality Hits Different
Here’s where my optimism starts getting complicated, though.
I teach part-time at a local community college – nothing fancy, just helping people prepare for civil service exams. Last semester, I started noticing something weird in the essays students were turning in. They were… too good. Not obviously plagiarized, but they had this generic perfection that felt off.
After some detective work, I figured out that about half my class was using AI to write their practice essays. Not cheating, exactly – they weren’t trying to hide it. They just genuinely didn’t see the problem.
When I brought it up, one student – let’s call him Marcus – got defensive. “Why should I struggle with writing when there’s a tool that can help me express my ideas better?” he asked. “Isn’t the point to communicate effectively?”
I opened my mouth to argue, then closed it. Because… he wasn’t wrong? But he also wasn’t right. There’s something important about the struggle of finding your own words, making your own mistakes, developing your own voice. But how do you explain that to someone who’s been told their whole life that efficiency is everything?
That conversation kept me up for weeks.
The Stuff That Actually Scares Me
You want to know what really freaks me out? It’s not the robot uprising scenarios. It’s the subtle stuff. The way we’re slowly handing over pieces of ourselves without really noticing.
I caught myself last month asking ChatGPT for advice about a fight I was having with my sister. Not because I thought it would give me better advice than a human friend – but because it was easier. No judgment, no follow-up questions, no need to explain the complicated family history.
But afterward, I felt… empty. Like I’d eaten fast food when I was craving a home-cooked meal. The AI gave me perfectly reasonable suggestions, but something was missing. The messy, imperfect, beautifully human process of talking through problems with someone who actually knows and cares about me.
And don’t get me started on deepfakes. I saw a video of my governor “announcing” a policy that would have been political suicide. It was so convincing that I had to fact-check it three times. But here’s the scary part – by the time I verified it was fake, it had already been shared thousands of times.
We’re entering an era where seeing isn’t believing anymore. Where video evidence means nothing. Where your own eyes can lie to you. How do we navigate a world like that?
The Questions That Keep Me Up
Look, I’m not a philosopher or a tech expert. I’m just someone trying to figure out how to live ethically in a world that’s changing faster than I can keep up with.
But here are the questions that rattle around in my brain at night:
If AI can write better than most humans, what does that mean for human expression? Are we all going to become editors of our own thoughts? Is there still value in clumsy, imperfect, authentically human communication?
Who’s responsible when AI screws up? Last year, an AI system recommended insulin doses that could have killed diabetic patients. The company blamed faulty training data. The data providers blamed unclear specifications. Meanwhile, real people’s lives hung in the balance.
What happens to empathy in an AI world? I’ve seen kids having deeper conversations with chatbots than with their parents. Not because the AI is better at empathy – but because it never gets tired, never gets frustrated, never has a bad day. Is artificial patience better than authentic human messiness?
Are we solving problems or just hiding from them? AI can help us avoid awkward conversations, difficult decisions, uncomfortable truths. But maybe those awkward moments are where we actually grow as humans.
The Bias Problem Hits Different When It’s Personal
Here’s something that really got to me. My friend Jasmine, who’s Black, was job hunting last year. She kept getting rejected from positions she was overqualified for. On a whim, she used an AI tool to “optimize” her resume – and it suggested removing her involvement in Black student organizations and changing the name of her historically Black college to just “university.”
The AI wasn’t trying to be racist. It was just optimizing based on patterns in successful resumes. But those patterns reflected decades of systemic bias in hiring. The algorithm was essentially telling her to hide her Black identity to get hired.
When we talked about it, Jasmine said something that stuck with me: “It’s like the AI is holding up a mirror to show us exactly how messed up our systems already are. But instead of fixing the systems, we’re just teaching people to game them better.”
That’s when I realized that AI bias isn’t really an AI problem – it’s a human problem that AI makes impossible to ignore.
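Later, I tried to convince myself I actually understood what “optimizing based on patterns” means in practice, so I wrote myself a toy sketch in Python. To be clear: everything in it is invented – the feature, the numbers, the fake “tool” – none of it comes from Jasmine’s actual resume software. But it shows the mechanism:

```python
# Toy sketch, not any real hiring system. All data here is invented.
# Each record: (resume_mentions_black_org, was_hired), drawn from
# "historical" decisions where biased hiring rejected most candidates
# who had that signal on their resume.
history = [(1, 0)] * 80 + [(1, 1)] * 20 + [(0, 0)] * 40 + [(0, 1)] * 60

# "Training": estimate P(hired | feature) straight from the historical labels.
hire_rate = {}
for value in (0, 1):
    outcomes = [hired for feature, hired in history if feature == value]
    hire_rate[value] = sum(outcomes) / len(outcomes)

print(hire_rate)  # {0: 0.6, 1: 0.2} -- the old bias is now "learned"

# "Optimizing" a resume just means maximizing predicted hire probability,
# so the tool's advice falls out mechanically:
if hire_rate[1] < hire_rate[0]:
    print("Suggestion: remove the affiliation")  # i.e., hide the identity signal
```

Nothing in that code mentions race. It just faithfully learns the correlation that decades of biased decisions baked into the labels, and then “helpfully” optimizes around it. That’s the whole problem, in a dozen lines.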
What I Think We Should Do (But I’m Still Figuring It Out)
I’m not going to pretend I have all the answers. Hell, I’m not sure anyone does. But here’s what I think we need to start doing:
We need to get comfortable with saying “I don’t know.” The pace of AI development is so fast that even the experts are constantly surprised by new capabilities. Instead of pretending we have everything figured out, maybe we should embrace the uncertainty and make decisions we can revisit as we learn more.
We need diverse voices in these conversations. Not just tech bros and academics, but teachers, artists, social workers, small business owners, retirees – people from all walks of life who will be affected by these technologies.
We need to teach AI literacy like we teach media literacy. Kids should understand how algorithms work, what training data means, why AI might be biased. Not because they’re going to become programmers, but because they’re going to live in a world shaped by AI.
We need to preserve spaces for authentic human messiness. Maybe some things shouldn’t be optimized. Maybe some conversations shouldn’t be efficient. Maybe some learning needs to be slow and frustrating and deeply personal.
The Thing That Gives Me Hope
You know what actually makes me optimistic? It’s conversations like the one I had with Marcus, my student who was using AI for essays. After our class discussion about AI and authenticity, he came up to me and said, “I never thought about it that way. Can you help me figure out when it’s okay to use AI and when I should struggle through it myself?”
That question – that curiosity about doing the right thing – that’s what gives me hope.
Because AI isn’t happening to us. It’s happening with us. Every time we choose to use it or not use it, every time we question its recommendations or trust its suggestions, every time we decide what aspects of our humanity we want to preserve – we’re shaping the future.
The Mirror Shows Us Who We Really Are
Here’s my half-formed, probably-wrong, definitely-incomplete theory: AI is like a funhouse mirror for humanity. It reflects back our intelligence, but also our biases. Our creativity, but also our laziness. Our desire to help people, but also our tendency to exclude and discriminate.
The scary thing isn’t that AI might become more human-like. It’s that we might become more AI-like – optimized, efficient, but somehow less authentically ourselves.
But maybe that’s okay. Maybe struggling with these questions, feeling uncomfortable about the answers, staying curious about the implications – maybe that’s the most human thing we can do.
I don’t know what the world will look like in ten years. I don’t know if my job will exist, or if my niece will become an artist in a world where machines can paint, or if we’ll find ways to use AI that amplify the best parts of being human instead of replacing them.
What I do know is that we’re all in this together, figuring it out as we go. And somehow, that feels both terrifying and reassuring at the same time.
Still thinking about all this stuff. Probably always will be.
