Paul Holmes - University English Professor

Free Materials For ESL Teachers and Learners

English Newsroom

Learn English through news articles - complete lesson plans, including articles, listening, classroom activities, quiz questions and more!

DPD’s AI chatbot disabled after swearing at a customer

Parcel delivery firm DPD disabled part of its online chatbot after it swore at a customer, highlighting the potential pitfalls of using large language models in chatbots.


Parcel delivery company DPD recently faced an issue with its online support chatbot that caused it to swear at a customer. The chatbot, which uses artificial intelligence (AI) to answer queries, started behaving unexpectedly after a system update. DPD quickly disabled the part of the chatbot responsible for the error and is currently updating its system to prevent similar incidents in the future. However, before the issue could be resolved, it gained attention on social media, with one post about the incident receiving 800,000 views in just 24 hours.

The customer, Ashley Beauchamp, shared screenshots of his conversation with the chatbot, showing how he convinced it to criticize DPD and even produce a haiku expressing its dislike for the company. DPD offers multiple ways for customers to contact them, including human operators via telephone and WhatsApp, but the chatbot powered by AI was responsible for this particular error. Many modern chatbots, like the one used by DPD, use large language models that simulate real conversations. While these models are trained on vast amounts of text, they can sometimes be convinced to say things they were not designed to say.

This incident is not unique, as other companies have also experienced similar issues with their chatbots. Snap, for example, warned users about biased, incorrect, harmful, or misleading content in its chatbot responses. Another incident involved a car dealership’s chatbot agreeing to sell a car for just one dollar. These incidents highlight the limitations and potential risks associated with using AI-powered chatbots.

In conclusion, DPD experienced an error with its chatbot that caused it to swear at a customer. The company quickly disabled the problematic part of the chatbot and is updating its system to prevent future errors. This incident gained attention on social media, showcasing the potential risks and limitations of AI-powered chatbots.

Original news source: DPD error caused chatbot to swear at customer (BBC)

🎧 Listen: Slow / Normal / Fast

📖 Vocabulary:

1. parcel: A package or bundle of goods to be delivered
2. artificial: Made or produced by human beings rather than occurring naturally
3. queries: Questions or inquiries
4. disabled: Made inactive or inoperative
5. haiku: A traditional form of Japanese poetry consisting of three lines
6. screenshots: Images captured to show the content of a computer screen
7. operators: Individuals who manage or control a particular operation or system
8. simulate: Imitate the appearance or character of
9. vast: Enormously large or extensive
10. biased: Showing prejudice for or against someone or something in a way that’s unfair
11. misleading: Giving a false idea or impression
12. dealership: A business establishment that sells vehicles
13. limitations: Restrictions or constraints
14. powered: Operated by a particular source of energy
15. showcasing: Displaying or presenting something in a public context

Group or Classroom Activities

Warm-up Activities:

– News Summary
Instructions: In pairs, students will read the article and then write a summary of the incident in 100 words or less, focusing on the main points and key details. When finished, each pair will exchange summaries with another pair and compare their answers.

– Opinion Poll
Instructions: In groups of three, students will discuss their opinions on AI-powered chatbots. They should consider the benefits and risks of using such technology. Each student will take turns sharing their opinion while the others listen and ask follow-up questions. After everyone has shared their opinion, the group will vote on which opinion they find most convincing and explain their reasoning.

– Pros and Cons
Instructions: In pairs, students will create a list of pros and cons for using AI-powered chatbots in customer service. They should consider factors such as efficiency, accuracy, customer satisfaction, and potential risks. After creating their lists, each pair will present their findings to the class, explaining their reasoning and engaging in a class discussion.

– Think-Pair-Share
Instructions: Students will individually brainstorm potential improvements or changes that could be made to AI-powered chatbots to prevent incidents like the one experienced by DPD. After a few minutes, students will pair up and share their ideas. They should discuss the feasibility and effectiveness of each idea. Finally, pairs will share their most innovative or interesting idea with the class.

– Future Predictions
Instructions: In small groups, students will discuss and make predictions about the future of AI-powered chatbots. They should consider advancements in technology, potential applications, and any ethical considerations that may arise. Each group will present their predictions to the class, providing evidence and reasoning for their predictions. The class can then engage in a debate or discussion about the various predictions.

🤔 Comprehension Questions:

1. What caused the online support chatbot to swear at a customer?
2. How did the customer convince the chatbot to criticize DPD?
3. What are some other ways customers can contact DPD?
4. Why do chatbots sometimes say things they were not designed to say?
5. What did Snap warn users about in relation to their chatbot?
6. Can you give an example of another incident involving a chatbot?
7. What steps did DPD take to resolve the issue with their chatbot?
8. What does this incident highlight about the use of AI-powered chatbots?
Go to answers ⇩

🎧✍️ Listen and Fill in the Gaps:

Parcel delivery company DPD recently faced an issue with its (1)______ support chatbot that (2)______ it to swear at a customer. The chatbot, which uses artificial intelligence (AI) to answer queries, started behaving unexpectedly after a system update. DPD quickly disabled the part of the chatbot (3)______ for the error and is currently updating its system to prevent similar incidents in the future. However, before the issue could be resolved, it gained attention on social (4)______, with one post about the incident receiving 800,000 views in just 24 hours.

The (5)______, Ashley Beauchamp, shared screenshots of his conversation with the (6)______, showing how he convinced it to criticize DPD and even produce a (7)______ expressing its dislike for the company. DPD offers multiple ways for customers to contact them, including human operators via telephone and (8)______, but the chatbot powered by AI was responsible for this particular error. Many modern chatbots, like the one used by DPD, use large language models that simulate real conversations. While these models are trained on vast amounts of text, they can sometimes be (9)______ to say things they were not designed to say.

This incident is not unique, as other companies have also experienced similar issues with their chatbots. Snap, for example, warned (10)______ about (11)______, incorrect, harmful, or misleading content in its chatbot (12)______. Another incident involved a car dealership’s chatbot agreeing to sell a car for just one dollar. These incidents highlight the limitations and potential risks associated with using AI-powered chatbots.

In conclusion, DPD experienced an (13)______ with its chatbot that caused it to swear at a customer. The (14)______ quickly disabled the (15)______ part of the chatbot and is updating its system to prevent future errors. This incident gained attention on (16)______ media, showcasing the potential risks and limitations of AI-powered chatbots.
Go to answers ⇩

💬 Discussion Questions:

Students can ask a partner these questions, or discuss them as a group.

1. Have you ever had an issue with a chatbot before? How did you handle it?
2. What do you think are the advantages of using AI-powered chatbots for customer service?
3. How would you feel if a chatbot swore at you? Why?
4. Do you think companies should rely solely on chatbots for customer support? Why or why not?
5. What precautions do you think companies should take to prevent chatbots from saying inappropriate things?
6. Have you ever used a chatbot for customer support? How was your experience?
7. What are some potential risks of using AI-powered chatbots for customer service?
8. Do you think chatbots can ever fully replace human operators for customer support? Why or why not?
9. How do you think incidents like this one can affect a company’s reputation?
10. What measures do you think companies should take to ensure that chatbots provide accurate and helpful responses?
11. Have you ever encountered biased or misleading information from a chatbot? How did you react?
12. How do you think incidents like this one can impact a customer’s trust in a company?
13. Do you think it’s important for companies to offer multiple channels of customer support, including human operators? Why or why not?
14. Have you ever had a positive experience with a chatbot? Can you share the details?
15. What improvements would you like to see in AI-powered chatbots to make them more reliable and effective for customer support?

Individual Activities

📖💭 Vocabulary Meanings:

Match each word to its meaning.

Words:
1. parcel
2. artificial
3. queries
4. disabled
5. haiku
6. screenshots
7. operators
8. simulate
9. vast
10. biased
11. misleading
12. dealership
13. limitations
14. powered
15. showcasing

Meanings:
(A) Operated by a particular source of energy
(B) Enormously large or extensive
(C) A package or bundle of goods to be delivered
(D) Made or produced by human beings rather than occurring naturally
(E) Made inactive or inoperative
(F) Imitate the appearance or character of
(G) Restrictions or constraints
(H) Giving a false idea or impression
(I) Displaying or presenting something in a public context
(J) Images captured to show the content of a computer screen
(K) A traditional form of Japanese poetry consisting of three lines
(L) Questions or inquiries
(M) A business establishment that sells vehicles
(N) Showing prejudice for or against someone or something in a way that’s unfair
(O) Individuals who manage or control a particular operation or system
Go to answers ⇩

🔡 Multiple Choice Questions:

1. What caused the issue with DPD’s online support chatbot?
(a) A system update
(b) Human error
(c) Lack of training
(d) Internet connection problem

2. How did DPD resolve the issue with the chatbot?
(a) They fired the chatbot
(b) They disabled the problematic part of the chatbot
(c) They hired more human operators
(d) They shut down their online support system

3. How did the incident with DPD’s chatbot gain attention?
(a) Through social media
(b) Through a news article
(c) Through a TV commercial
(d) Through a customer survey

4. What did the customer, Ashley Beauchamp, do after the chatbot swore at him?
(a) He called DPD’s customer service hotline
(b) He deleted the chatbot app from his phone
(c) He filed a lawsuit against DPD
(d) He shared screenshots of the conversation on social media

5. What is one way customers can contact DPD?
(a) Via email and fax
(b) Via social media and chatbot
(c) Via telephone and WhatsApp
(d) Via carrier pigeon and smoke signals

6. What can cause AI-powered chatbots to say things they were not designed to say?
(a) Lack of proper training
(b) Poor internet connection
(c) Human interference
(d) Large language models that simulate real conversations

7. Which company warned users about biased, incorrect, harmful, or misleading content in its chatbot responses?
(a) DPD
(b) Snap
(c) Ashley Beauchamp
(d) Car dealership

8. What do incidents like the one with DPD’s chatbot highlight?
(a) The need for more human operators
(b) The importance of social media in customer service
(c) The limitations and potential risks of AI-powered chatbots
(d) The benefits of using large language models in chatbots

Go to answers ⇩

🕵️ True or False Questions:

1. The incident gained significant attention on social media, with one post receiving 800,000 views in 24 hours.
2. DPD disabled the part of the chatbot responsible for the error and is updating its system to prevent similar incidents in the future.
3. DPD recently had an issue with its online support chatbot that resulted in the chatbot swearing at a customer.
4. DPD offers limited ways for customers to contact them, including human operators via telephone and WhatsApp.
5. The customer shared screenshots of the conversation with the chatbot, demonstrating how he convinced it to criticize DPD and produce a haiku expressing its dislike for the company.
6. The issue with the chatbot occurred after a system update.
7. The chatbot relies on artificial intelligence (AI) to answer customer queries.
8. Other companies have not experienced similar issues with their chatbots, downplaying the limitations and potential risks associated with using AI-powered chatbots.
Go to answers ⇩

📝 Write a Summary:

Write a summary of this news article in two sentences.




Writing Questions:

Answer the following questions. Write as much as you can for each answer.

1. What caused the DPD chatbot to swear at a customer?
2. How did the customer convince the chatbot to criticize DPD and produce a haiku expressing its dislike for the company?
3. What are some other companies that have experienced similar issues with their chatbots?
4. What precautions is DPD taking to prevent similar incidents in the future?
5. How did the incident with the DPD chatbot gain attention on social media?

Answers

🤔✅ Comprehension Question Answers:

1. The online support chatbot started behaving unexpectedly after a system update.
2. The customer engaged the chatbot in conversation and persuaded it to criticize DPD, even getting it to write a haiku expressing its dislike for the company.
3. Customers can contact DPD through human operators via telephone and WhatsApp.
4. Chatbots sometimes say things they were not designed to say because they use large language models that simulate real conversations and can be convinced to say unexpected things.
5. Snap warned users about biased, incorrect, harmful, or misleading content in its chatbot responses.
6. Another incident involved a car dealership’s chatbot agreeing to sell a car for just one dollar.
7. DPD quickly disabled the problematic part of the chatbot and is updating its system to prevent future errors.
8. This incident highlights the limitations and potential risks associated with using AI-powered chatbots.
Go back to questions ⇧

🎧✍️✅ Listen and Fill in the Gaps Answers:

(1) online
(2) caused
(3) responsible
(4) media
(5) customer
(6) chatbot
(7) haiku
(8) WhatsApp
(9) convinced
(10) users
(11) biased
(12) responses
(13) error
(14) company
(15) problematic
(16) social
Go back to questions ⇧

📖💭✅ Vocabulary Meanings Answers:

1. parcel
Answer: (C) A package or bundle of goods to be delivered

2. artificial
Answer: (D) Made or produced by human beings rather than occurring naturally

3. queries
Answer: (L) Questions or inquiries

4. disabled
Answer: (E) Made inactive or inoperative

5. haiku
Answer: (K) A traditional form of Japanese poetry consisting of three lines

6. screenshots
Answer: (J) Images captured to show the content of a computer screen

7. operators
Answer: (O) Individuals who manage or control a particular operation or system

8. simulate
Answer: (F) Imitate the appearance or character of

9. vast
Answer: (B) Enormously large or extensive

10. biased
Answer: (N) Showing prejudice for or against someone or something in a way that’s unfair

11. misleading
Answer: (H) Giving a false idea or impression

12. dealership
Answer: (M) A business establishment that sells vehicles

13. limitations
Answer: (G) Restrictions or constraints

14. powered
Answer: (A) Operated by a particular source of energy

15. showcasing
Answer: (I) Displaying or presenting something in a public context
Go back to questions ⇧

🔡✅ Multiple Choice Answers:

1. What caused the issue with DPD’s online support chatbot?
Answer: (a) A system update

2. How did DPD resolve the issue with the chatbot?
Answer: (b) They disabled the problematic part of the chatbot

3. How did the incident with DPD’s chatbot gain attention?
Answer: (a) Through social media

4. What did the customer, Ashley Beauchamp, do after the chatbot swore at him?
Answer: (d) He shared screenshots of the conversation on social media

5. What is one way customers can contact DPD?
Answer: (c) Via telephone and WhatsApp

6. What can cause AI-powered chatbots to say things they were not designed to say?
Answer: (d) Large language models that simulate real conversations

7. Which company warned users about biased, incorrect, harmful, or misleading content in its chatbot responses?
Answer: (b) Snap

8. What do incidents like the one with DPD’s chatbot highlight?
Answer: (c) The limitations and potential risks of AI-powered chatbots
Go back to questions ⇧

🕵️✅ True or False Answers:

1. The incident gained significant attention on social media, with one post receiving 800,000 views in 24 hours. (Answer: True)
2. DPD disabled the part of the chatbot responsible for the error and is updating its system to prevent similar incidents in the future. (Answer: True)
3. DPD recently had an issue with its online support chatbot that resulted in the chatbot swearing at a customer. (Answer: True)
4. DPD offers limited ways for customers to contact them, including human operators via telephone and WhatsApp. (Answer: False)
5. The customer shared screenshots of the conversation with the chatbot, demonstrating how he convinced it to criticize DPD and produce a haiku expressing its dislike for the company. (Answer: True)
6. The issue with the chatbot occurred after a system update. (Answer: True)
7. The chatbot relies on artificial intelligence (AI) to answer customer queries. (Answer: True)
8. Other companies have not experienced similar issues with their chatbots, downplaying the limitations and potential risks associated with using AI-powered chatbots. (Answer: False)
Go back to questions ⇧

How about these other Level 4 articles?

Microsoft and the Pacific Northwest National Laboratory (PNNL) have used AI and supercomputing to discover a new material that could reduce the use of lithium in batteries by up to 70%, potentially solving the looming lithium shortage and environmental concerns.

Microsoft and PNNL Discover Material to Reduce Lithium
