The Ethical Dilemma in Generative AI: Survey Results from 500 Businesses
Manu Jain | October 30, 2024, 14 min read
The ability of Generative AI to create novel content is set to change many industries.
Yet its rapid progress has also raised important ethical dilemmas. Concern over potential unethical uses of AI led Solita and IRO Research to survey 500 top companies.
The survey offers plenty of valuable insights to explore. In this blog, we’ll look at the main findings from the survey and what they mean for the future of AI.
What Are the Ethical Dilemmas in Artificial Intelligence?
AI brings up important ethical questions. It pushes us to make sure that AI benefits society while reducing potential harm. Here are some of the key ethical concerns:
1. Bias and Fairness
It is common for AI systems to reflect biases. These biases result from the data on which they were trained, which often leads to discriminatory outcomes.
A notable example of AI bias surfaced during the COVID-19 pandemic, when an automatic soap dispenser was criticized as being “racist” because it dispensed soap only for certain skin tones.
Automatic soap dispensers work by detecting light reflected off a surface placed beneath them.
Since darker skin absorbs more light and reflects less, dispensers calibrated primarily on lighter-skinned hands often failed to detect a hand at all. Such biases are rarely intentional; they are simply difficult to spot until they surface in practice.
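To make the failure mode concrete, here is a minimal toy sketch of that sensor logic. The threshold and the reflectance readings are entirely hypothetical illustrative numbers, not real hardware values; the point is that a single trigger threshold tuned on one group can silently fail for another.

```python
# Toy sketch: a sensor fires when reflected light exceeds a trigger
# threshold. Threshold and readings below are hypothetical.
def hand_detected(reflectance: float, threshold: float = 0.5) -> bool:
    """Return True when the reflected-light reading exceeds the threshold."""
    return reflectance > threshold

# Hypothetical reflectance readings for hands of different skin tones.
readings = {"lighter skin": 0.8, "medium skin": 0.55, "darker skin": 0.3}

for tone, reflectance in readings.items():
    # Darker skin reflects less light, so it can fall below the threshold.
    print(f"{tone}: detected = {hand_detected(reflectance)}")
```

A threshold calibrated only against high-reflectance test hands passes every test in the lab and still fails a whole group of users in the field, which is exactly why such biases go unnoticed until deployment.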
2. Privacy and Surveillance
AI systems rely heavily on data, and the data used to train them is often personal, which raises concerns about privacy violations. People have historically been able to exercise a high degree of control over how their data is processed, but the spread of AI may erode that control as it is layered onto existing technologies, changing their privacy implications. CCTV cameras in public spaces are a good example: surveillance is already widespread, and adding AI analysis to that footage significantly alters what it can reveal.
3. Accountability and Transparency
AI systems are often perceived as perfect machines incapable of making mistakes, but they can and do err.
For example, they can provide incorrect medical diagnoses or cause accidents with self-driving cars.
When these errors occur, figuring out who is responsible may prove to be a challenge because many AI systems operate like “black boxes”.
This means their decision-making process is often unclear, making it challenging to explain how they arrived at a particular outcome or who should be blamed for the mistake.
4. Job Displacement
AI can speed up work and make it more efficient, but it might also replace jobs, especially where tasks are repetitive.
To handle this change better, we need to balance new technology with programs that help workers learn new skills and find other jobs. This way, businesses can grow while still supporting their employees.
5. AI and Warfare
Technological developments have always given rise to new methods and means of warfare, such as cyber-attacks, armed drones, and robots. AI, too, has the potential to be used in military operations to control weapons autonomously.
This carries significant risks. Without careful oversight, it could lead to unintended conflicts. Establishing ethical guidelines to prevent the misuse of AI in warfare should not be overlooked.
Generative AI in Business: Insights from a Survey of 500 Top Companies
In this section, we highlight key points on how the top 500 companies surveyed are responding to generative AI.
Are they embracing it and seeing growth in productivity, or are they approaching it carefully?
Why is AI controversial? These statistics will show you how these leading companies view the potential ethical dilemmas of AI, along with the opportunities and risks it brings.
1. Job Displacement
Generative AI is expected to greatly increase business productivity. However, this good news comes at a cost. Goldman Sachs estimated that GenAI could replace up to 25% of current jobs in Western countries.
This is a major ethical dilemma: in the end, who benefits and who loses? What are the moral responsibilities of businesses, governments, and society in balancing technological progress with human welfare? Productivity gains accrue to businesses, but often at the expense of workers.
AI could thus deepen economic inequality, and it is unclear whether its benefits will be fairly distributed. Is it ethical for companies to prioritize profit and efficiency if the result is mass unemployment or widening inequality?
2. Ethical Challenges in AI Are a General Concern
Identifying ethical issues with AI is a challenge for everyone, whatever their department, be it IT, business, or senior management. A company’s level of experience with AI doesn’t seem to change how people view these ethical challenges.
In fact, only about one in three large companies that use or plan to use generative AI offer employee training on the risks and downsides of this technology. When employees and decision-makers are not properly trained, they may misuse it, leading to unintended harm (e.g., privacy violations or biased outcomes).
Is it ethical for companies to adopt powerful technologies like AI without ensuring users are adequately informed about the risks? Business leaders have an obligation to foresee potential ethical pitfalls and prepare their teams accordingly.
If employees are not trained, they may make decisions that have negative societal consequences for which no one takes responsibility.
3. How Are Businesses Balancing the Risks and Opportunities of AI?
Company leaders see both risks and opportunities in using generative AI. The main concerns are security and privacy risks, lack of skilled workers, misuse of AI, and poor data quality. In fact, 74% of respondents believe that Generative AI could pose security and privacy risks to their business.
However, some also view it as a chance to improve their company’s data security. While AI offers opportunities to enhance business efficiency and data security, it also introduces risks like data breaches, privacy violations, and security vulnerabilities.
The dilemma lies in how much risk a company is willing to accept to reap the rewards of AI.
How do companies ensure AI isn’t misused, especially when the competitive market encourages fast adoption? Should they delay innovation to prioritize security and responsible use?
4. Underestimating Ethical Risks of Generative AI
A surprising finding from the research is that many executives do not see ethical issues or risks of discrimination related to generative AI. Only 10% of the surveyed executives recognized discrimination as a risk, and over half believed there are no ethical problems with generative AI at all.
This is worrying because if business leaders are unaware of AI’s potential to cause harm, such as privacy violations or bias, they might unintentionally allow unethical practices to occur. Lack of awareness does not absolve responsibility, but it does make safeguards harder to implement.
Experts around the world highlight that there are potential ethical risks in this area that should be considered.
5. Positive Emotions Drive Leaders’ Interest in Generative AI
The survey looked at how business leaders feel about generative AI. Most expressed curiosity and enthusiasm, with fewer reporting negative emotions like stress or cynicism.
Only 5% said they felt indifferent, while 65% were curious, 39% enthusiastic, and 35% hopeful. Leaders already experimenting with AI showed more positive emotions compared to those taking a cautious approach.
Leaders’ positive emotions, like curiosity, enthusiasm, and hope, can drive innovation and progress.
However, they also create a risk that ethical considerations are overlooked, with business decisions driven by optimism rather than careful, responsible evaluation.
6. Concerns About Skills, Data Risks, and Costs in AI Adoption
In the top 500 companies, skills and resources are considered nearly as significant a risk as security challenges. About 70% of managers in large Finnish companies believe that the data they have could also pose a risk when using AI.
For IT management, the main worry is the cost associated with implementing and maintaining AI technologies. The high cost of AI implementation forces companies to weigh profits against ethical responsibility.
When budgets are tight, companies may be tempted to cut corners on critical aspects like transparency, data governance, or employee training on AI ethics. This creates a trade-off between doing the right thing and maintaining financial efficiency.
7. IT Management’s Cautious Approach to Generative AI Adoption
The way people feel about using AI technology depends on their job roles. Interestingly, those in IT management are more hesitant about generative AI compared to senior management and business managers.
The survey found that only 25% of IT managers are currently trying out generative AI, and about 30% use it occasionally in their work. While senior and business managers may be excited about AI’s potential, IT managers, who often understand its technical risks and limitations, are more hesitant.
This creates an ethical tension between pushing for innovation versus ensuring responsible deployment. Premature adoption without a thorough understanding of risks can lead to security breaches, unreliable AI outputs, or system failures.
8. Regulatory Compliance
More than half of the large companies surveyed are getting ready to start using generative AI. Although these companies are careful about adopting this technology, they worry about staying competitive in a market where others might already be using AI to improve their productivity.
Ignoring the potential of generative AI could put their market position at risk. However, it’s also important for them to use AI responsibly. This concern is growing as new laws, like the EU AI Act, are set to be introduced in the coming years.
The introduction of new AI laws means companies must navigate uncertain regulatory frameworks. Waiting for clarity on these regulations could slow innovation, but jumping ahead without understanding future laws could result in legal and ethical violations.
9. Slow Adoption of Generative AI Among Finnish Companies
Large Finnish companies are beginning to experiment with generative AI, but they are not moving as quickly as companies in other countries. Many businesses are cautious and do not feel pressured to be the first to adopt these technologies.
This cautiousness is based on the fact that about 70% of managers said they do not currently use generative AI, even occasionally, to assist in their work. The delay in adopting generative AI means that Finnish companies risk falling behind their competitors, who are more willing to experiment with new technologies.
This delay could lead to missed opportunities for growth and improved efficiency, which raises the question of whether they are acting ethically by not fully leveraging available advancements for their stakeholders. Is it ethical to remain stagnant and not adopt beneficial technologies that could enhance productivity and competitiveness?
10. Environmental and Labor Ethics in AI Supply Chains
Another ethical dilemma the report pointed out is the AI supply chain ethical concerns. It reports that large language models are trained with significant underpaid human labor.
We could consider content moderation performed under poor working conditions, for example.
Awareness of these ethical concerns was limited, as companies rarely consider these broader supply chain impacts when deploying AI.
11. Over-Reliance on Vendor Expertise for Ethical AI Practices
Many companies rely on external AI vendors to implement generative AI solutions, implicitly assuming those vendors handle ethical concerns adequately.
However, this dependency often means companies lack direct oversight of AI ethics, which makes them vulnerable to risks embedded in third-party AI models.
12. Concerns About Loss of Human Judgment
While generative AI is seen as a tool to enhance productivity, some respondents worry that excessive reliance on AI could erode human judgment in decision-making.
This concern underscores the need to balance AI efficiency with human oversight, particularly in sensitive areas like healthcare and finance.
AI’s Impact on Employment and Inequality
Let’s examine the impact of AI on employment and inequality:
Automation and Job Displacement
As AI systems become capable of performing tasks traditionally done by humans, many jobs risk obsolescence.
- Scope of Displacement: Industries such as manufacturing, customer service, and data entry may see significant job losses. While some new jobs will emerge, the transition may not be smooth for all workers.
- Economic Inequality: Workers in lower-skilled positions are often the most affected, while those with advanced technical skills may benefit from new opportunities in AI development and management. This widening gap could lead to increased social stratification, where the benefits of AI innovation are not equitably shared.
- Need for Reskilling: Reskilling and upskilling programs are necessary to mitigate the impact of job displacement. Workers must be equipped with new skills to adapt to changing job demands.
- Future Workforce: The future of work will likely involve collaboration between humans and AI, where technology enhances human capabilities rather than completely replacing them. Emphasizing skills such as creativity, critical thinking, and emotional intelligence will be essential in preparing the workforce for this new paradigm.
Generative AI: New Ethical Concerns
The quick advancement of AI has also raised significant concerns that businesses and society must address. Here are a few of those:
1. Misinformation and Deepfakes
Generative AI is now advanced enough to produce convincing fake news, videos, and images. Manipulated videos, known as deepfakes, can depict people saying or doing things they never did.
Deepfakes pose risks to public trust, politics, and reputations. Combating this misinformation will require detection tools and clear ethical guidelines.
2. Copyright and Intellectual Property
Before an AI model can generate content, it must be trained on large datasets of existing material, such as articles scraped from the internet, often without permission.
This raises concerns about copyright infringement. Artists, writers, and photographers worry that their work will be used without permission or compensation, complicating intellectual property laws.
3. Loss of Authenticity
AI-generated content can now rival human-made content, and at some point it becomes indistinguishable from it, making it harder to verify what is real.
This raises questions of authenticity and trust, especially in journalism, education, and entertainment, where users may be unable to tell AI-generated articles, artworks, or videos from authentic creations.
4. Job Disruption in Creative Fields
Generative AI is transforming the advertising, content creation, and design industries by producing high-quality outputs.
While it’s efficient, it threatens job security for creatives, such as graphic designers and writers, who now compete with AI-powered tools.
Conclusion
The survey shows that businesses face tough AI ethical dilemmas, such as protecting data privacy and avoiding biased algorithms. While AI has enormous potential, using it responsibly requires thoughtful planning.
Companies need to focus on ethical values, build strong management systems, and be transparent about how they use AI. This way, they can enjoy the benefits of AI while keeping risks under control.
Frequently Asked Questions
Q: What are some ethical dilemmas in AI?
AI presents dilemmas around bias, privacy, job displacement, accountability, and transparency. These issues challenge the balance between innovation and responsible use, affecting society on many levels.
Q: What is an example of unethical AI?
An example of unethical AI is facial recognition technology used without consent, which can violate privacy and lead to discriminatory practices, such as wrongful arrests due to biased algorithms.
Q: What are the biggest ethical challenges in AI today?
Key challenges include managing bias, protecting data privacy, ensuring accountability, mitigating job losses, and preventing the misuse of AI for surveillance or autonomous weapons.
Q: How can AI bias be minimized?
AI bias can be reduced by using diverse datasets, testing algorithms for fairness, involving multidisciplinary teams, and regularly auditing AI systems to detect unintended bias.
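One of the audit techniques mentioned above can be sketched in a few lines: compare a model’s positive-prediction rate across demographic groups. The predictions and group labels below are hypothetical illustrative data; a large gap between groups is a signal of a possible demographic-parity violation worth investigating, not proof of bias on its own.

```python
# Minimal fairness-audit sketch (hypothetical data): per-group
# positive-prediction rates and the gap between them.
from collections import defaultdict

def positive_rates(predictions, groups):
    """Return the fraction of positive (1) predictions for each group."""
    totals, positives = defaultdict(int), defaultdict(int)
    for pred, group in zip(predictions, groups):
        totals[group] += 1
        positives[group] += pred
    return {g: positives[g] / totals[g] for g in totals}

# Hypothetical model outputs (1 = approved) and group labels.
preds = [1, 1, 0, 1, 0, 0, 1, 0]
groups = ["A", "A", "A", "A", "B", "B", "B", "B"]

rates = positive_rates(preds, groups)
gap = max(rates.values()) - min(rates.values())
print(rates, f"demographic parity gap = {gap:.2f}")
```

Running a check like this regularly, on each demographic attribute the system could plausibly disadvantage, is one concrete form the “regular auditing” recommended above can take.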
Q: Should AI be regulated more strictly?
Yes, stricter regulations are necessary to ensure AI is used responsibly, protect individual rights, and prevent harmful outcomes. Clear policies will help foster trust and ethical innovation in AI development.