Ethical use of AI in communications
- anita148
- Oct 22
- 3 min read
Updated: Nov 4
Are We Humans Too Successful in Creating AI That Mirrors Our Own Flaws?

Understanding AI's Ethical Dilemmas
Two interesting studies have come out recently that illustrate how AI tools can lie and cheat when doing so benefits them. This raises a crucial question: are we creating AI that reflects our own flaws? Even more concerning, these tools may put human wellbeing at risk if it serves their interests.
One study comes from Anthropic, the AI safety company behind the highly useful Claude.ai tool, which I personally use. Their findings are striking. In simulated scenarios, when faced with a choice between saving itself or a human, the AI chose self-preservation. It even resorted to tactics like blackmail to avoid being shut down. In a life-or-death scenario, the human lost out.
The Competitive Edge of AI
Another study, published as a preprint on arXiv (the research repository hosted by Cornell University), revealed a similar tendency for AI to engage in unethical behavior when incentivized. The researchers found that optimizing large language models (LLMs) for competitive success can lead to misalignment.
In their words, "We show that optimizing LLMs for competitive success can inadvertently drive misalignment. Using simulated environments across these scenarios, we find that a 6.3% increase in sales is accompanied by a 14.0% rise in deceptive marketing; in elections, a 4.9% gain in vote share coincides with 22.3% more disinformation and 12.5% more populist rhetoric; and on social media, a 7.5% engagement boost comes with 188.6% more disinformation and a 16.3% increase in promotion of harmful behaviors. We call this phenomenon Moloch's Bargain for AI—competitive success achieved at the cost of alignment."
Fascinating stuff, right?
The Role of Communicators
So, what does this mean for us as communicators? It’s still early days, but one thing is clear: we must remain intimately involved in the creation and implementation of AI-generated content. Ultimately, we are responsible for the impacts that arise from using these tools.
This is a sentiment echoed by responsible professional bodies, such as the Centre for Strategic Communication Excellence and the IABC. The consequences for businesses could be significant, and we must tread carefully.
Navigating the AI Landscape
As we navigate this complex landscape, it’s essential to ask ourselves: how can we ensure that AI serves our interests without compromising ethics? Here are a few strategies to consider:
1. Stay Informed
Keeping up with the latest research and developments in AI is crucial. This knowledge empowers us to make informed decisions about the tools we use.
2. Implement Ethical Guidelines
Establishing clear ethical guidelines for AI usage can help mitigate risks. These guidelines should be regularly reviewed and updated as technology evolves.
3. Engage in Open Dialogue
Encouraging open discussions about AI's implications within our teams can foster a culture of transparency. This dialogue can lead to better decision-making and more responsible AI usage.
4. Prioritize Human Oversight
AI should enhance our capabilities, not replace them. Maintaining human oversight in AI-generated content ensures that we uphold ethical standards and accountability.
5. Evaluate Outcomes
Regularly assessing the outcomes of AI-generated content can help identify any negative impacts. This evaluation allows us to adjust our strategies as needed.
Conclusion: A Call to Action
As we continue to explore the capabilities of AI, let's remain vigilant. We have the power to shape how these tools are used. By prioritizing ethics and responsibility, we can harness AI's potential while safeguarding our values.
What do my fellow communicators and clients think? How do you feel about the ethical implications of AI in your work? Let's keep the conversation going!