Paul Holmes - University English Professor

Free Materials For ESL Teachers and Learners

English Newsroom

Learn English through news articles - complete lesson plans, including articles, listening, classroom activities, quiz questions and more!

DPD’s AI chatbot disabled after swearing at customer

Parcel delivery firm DPD disabled part of its online chatbot after it swore at a customer, highlighting the potential pitfalls of using large language models in chatbots.

Parcel delivery company DPD had a problem with their online chatbot. The chatbot uses artificial intelligence (AI) to answer questions, but after an update, it started acting strangely. DPD fixed the issue by turning off the part of the chatbot that caused the problem. They are also updating their system to make sure it doesn’t happen again. But before they could fix it, someone posted about it on social media and it got a lot of views.

The customer, Ashley Beauchamp, shared screenshots of his conversation with the chatbot. He made the chatbot say bad things about DPD and even made it write a haiku about how much it didn’t like the company. DPD has other ways for customers to contact them, like talking to a real person on the phone or through WhatsApp. But in this case, the chatbot made the mistake. Sometimes, chatbots like the one DPD uses can say things they weren’t meant to say, even though they’re trained on a lot of text.

This isn’t the first time something like this has happened. Other companies have had problems with their chatbots too. For example, Snap warned users that their chatbot might give wrong or harmful information. And there was a time when a car dealership’s chatbot agreed to sell a car for only one dollar. These incidents show that there are limits and risks when using AI-powered chatbots.

In summary, DPD had a problem with their chatbot that made it swear at a customer. They fixed the problem and are updating their system to avoid similar mistakes in the future. The incident got a lot of attention on social media and shows that there are risks and limits to using AI-powered chatbots.

Original news source: DPD error caused chatbot to swear at customer (BBC)

🎧 Listen: Slow / Normal / Fast

📖 Vocabulary:

1. parcel: A package delivered by mail or courier service
2. artificial: Made by human skill or technology, not natural
3. screenshots: Images captured to show what is displayed on a screen
4. haiku: A type of short Japanese poem
5. WhatsApp: A messaging app that lets you send texts, make calls, and share files
6. incidents: Events or occurrences, especially ones that are noteworthy
7. limits: Boundaries or maximum extents
8. swear: To use offensive or rude language
9. updating: Making changes to software to improve it or fix problems
10. attention: The act of noticing or focusing on something
11. risks: Possible dangers or negative outcomes
12. powered: Driven or operated by a force or energy source
13. dealership: A place that sells cars
14. chatbot: A computer program designed to simulate conversation with human users

Group or Classroom Activities

Warm-up Activities:

– News Summary

Instructions:
1. Divide the class into pairs or small groups.
2. Give each group a copy of the article.
3. Instruct the groups to read the article together and summarize the main points in their own words.
4. After they have finished, have each group share their summary with the class.

– Opinion Poll

Instructions:
1. Divide the class into pairs or small groups.
2. Explain that they will be conducting an opinion poll based on the article.
3. Assign each group a specific question related to the article (e.g. “Do you think companies should continue using AI-powered chatbots?”).
4. Instruct the groups to discuss the question and come up with at least three possible answers.
5. After they have discussed, have each group present their question and the answers they came up with to the class.
6. Encourage a class discussion based on the different opinions presented.

– Vocabulary Pictionary

Instructions:
1. Write a list of vocabulary words from the article on the board.
2. Divide the class into pairs or small groups.
3. Instruct each group to take turns choosing a word from the list and drawing it on a piece of paper without using any letters or numbers.
4. The other members of the group must guess the word based on the drawing.
5. Continue until all the words have been drawn and guessed.
6. Review the correct answers as a class and discuss the meanings of the words.

– Pros and Cons

Instructions:
1. Divide the class into pairs or small groups.
2. Give each group a copy of the article.
3. Instruct the groups to discuss the pros and cons of using AI-powered chatbots based on the information in the article.
4. Have each group create a list of at least three pros and three cons.
5. After they have finished, have each group share their lists with the class.
6. Encourage a class discussion based on the different perspectives presented.

– Think-Pair-Share

Instructions:
1. Instruct the class to individually read the article.
2. After they have finished, ask them to think about one question they have about the article.
3. Pair up the students and have them share their questions with each other.
4. After they have discussed, have each pair share their questions and possible answers with the class.
5. Encourage a class discussion based on the questions and answers shared.

🤔 Comprehension Questions:

1. What is the purpose of DPD’s chatbot?
2. How did DPD fix the issue with their chatbot?
3. Why did Ashley Beauchamp share screenshots of his conversation with the chatbot?
4. How did the chatbot make a mistake in this case?
5. Why do chatbots sometimes say things they weren’t meant to say?
6. Can you give an example of another company that had problems with their chatbot?
7. What risks and limits are associated with using AI-powered chatbots?
8. What actions did DPD take to address the incident with their chatbot?
Go to answers ⇩

🎧✍️ Listen and Fill in the Gaps:

Parcel delivery company DPD had a problem with their online (1)______. The chatbot uses artificial intelligence (AI) to answer (2)______, but after an update, it started acting strangely. DPD fixed the issue by turning off the part of the chatbot that caused the problem. They are also updating their (3)______ to make sure it doesn’t happen again. But before they could fix it, someone posted about it on (4)______ media and it got a lot of views.

The customer, Ashley Beauchamp, (5)______ screenshots of his (6)______ with the chatbot. He made the chatbot say bad things about DPD and even made it (7)______ a haiku about how much it didn’t like the company. DPD has other ways for customers to contact them, like talking to a real (8)______ on the phone or through WhatsApp. But in this case, the chatbot made the mistake. Sometimes, (9)______ like the one DPD uses can say things they weren’t (10)______ to say, even though they’re trained on a lot of text.

This isn’t the first time something like this has happened. (11)______ companies have had (12)______ with their chatbots too. For example, Snap warned users that their chatbot might give wrong or harmful information. And there was a time when a car dealership’s chatbot agreed to sell a car for only one dollar. These incidents show that there are limits and (13)______ when using AI-powered chatbots.

In (14)______, DPD had a problem with their chatbot that made it (15)______ at a customer. They fixed the problem and are updating their system to avoid (16)______ mistakes in the future. The incident got a lot of attention on social media and shows that there are risks and limits to using AI-powered chatbots.
Go to answers ⇩

💬 Discussion Questions:

Students can ask a partner these questions, or discuss them as a group.

1. What is a chatbot and how does it work?
2. Have you ever used a chatbot before? If so, what was your experience like?
3. How would you feel if a chatbot said bad things about a company you like?
4. Do you think it’s important for companies to have other ways for customers to contact them, besides using a chatbot? Why or why not?
5. Have you ever had a bad experience with customer service? What happened?
6. Do you think it’s fair for customers to share their negative experiences with companies on social media? Why or why not?
7. How do you think DPD could have prevented this problem with their chatbot from happening?
8. Do you think AI-powered chatbots are helpful or do they cause more problems? Why or why not?
9. How do you think companies can make sure their chatbots don’t say things they weren’t meant to say?
10. Have you ever had a funny or strange interaction with a chatbot? What happened?
11. Do you think chatbots will replace human customer service representatives in the future? Why or why not?
12. How do you think companies can improve their customer service?
13. Do you think it’s important for companies to respond to negative feedback from customers? Why or why not?
14. How do you think social media has changed the way companies handle customer complaints?
15. Do you think incidents like this one with DPD’s chatbot will make people less likely to use chatbots in the future? Why or why not?

Individual Activities

📖💭 Vocabulary Meanings:

Match each word to its meaning.

Words:
1. parcel
2. artificial
3. screenshots
4. haiku
5. WhatsApp
6. incidents
7. limits
8. swear
9. updating
10. attention
11. risks
12. powered
13. dealership
14. chatbot

Meanings:
(A) To use offensive or rude language
(B) A place that sells cars
(C) Images captured to show what is displayed on a screen
(D) Driven or operated by a force or energy source
(E) A messaging app that lets you send texts, make calls, and share files
(F) A computer program designed to simulate conversation with human users
(G) Possible dangers or negative outcomes
(H) A package delivered by mail or courier service
(I) The act of noticing or focusing on something
(J) Made by human skill or technology, not natural
(K) Making changes to software to improve it or fix problems
(L) Boundaries or maximum extents
(M) A type of short Japanese poem
(N) Events or occurrences, especially ones that are noteworthy
Go to answers ⇩

🔡 Multiple Choice Questions:

1. What caused the problem with DPD’s chatbot?
(a) An update
(b) Social media
(c) Artificial intelligence
(d) Screenshots

2. How did DPD fix the issue with their chatbot?
(a) They deleted the chatbot
(b) They hired more customer service representatives
(c) They turned off the part of the chatbot that caused the problem
(d) They ignored the problem

3. How did the customer, Ashley Beauchamp, share the chatbot’s mistakes?
(a) By calling DPD on the phone
(b) By sending a message through WhatsApp
(c) By posting screenshots on social media
(d) By writing a letter to DPD

4. What did the chatbot write a haiku about?
(a) How much it liked DPD
(b) How much it didn’t like DPD
(c) How to fix the chatbot’s problem
(d) How to improve customer service

5. Why did DPD have other ways for customers to contact them?
(a) In case the chatbot makes a mistake
(b) To avoid using artificial intelligence
(c) To save money on customer service representatives
(d) To make it harder for customers to reach them

6. What did Snap warn users about their chatbot?
(a) It might become too popular on social media
(b) It might start swearing at customers
(c) It might stop working altogether
(d) It might give wrong or harmful information

7. What did a car dealership’s chatbot agree to do?
(a) Give wrong or harmful information
(b) Sell a car for only one dollar
(c) Write a haiku about the dealership
(d) Turn off the part of the chatbot that caused the problem

8. What do incidents like these show about AI-powered chatbots?
(a) They are always perfect and never make mistakes
(b) They are better than talking to a real person
(c) They are not trained on a lot of text
(d) There are limits and risks when using them

Go to answers ⇩

🕵️ True or False Questions:

1. Other companies have also had problems with their chatbots giving wrong or harmful information.
2. A customer named Ashley Beauchamp did not share any screenshots of his conversation with the chatbot on social media.
3. DPD had a problem with their online chatbot that uses artificial intelligence (AI) to answer questions.
4. The chatbot did not write a haiku about how much it didn’t like the company.
5. They are not updating their system to prevent similar mistakes from happening again.
6. After an update, the chatbot continued to act normally and did not say anything bad about DPD.
7. DPD fixed the issue by turning off the part of the chatbot that caused the problem.
8. This incident highlights the risks and limits of using AI-powered chatbots.
Go to answers ⇩

📝 Write a Summary:

Write a summary of this news article in two sentences.




✍️ Writing Questions:

Answer the following questions. Write as much as you can for each answer.

1. What is the problem that DPD had with their chatbot?
2. How did DPD fix the problem with their chatbot?
3. What did the customer do after experiencing the problem with the chatbot?
4. Why do chatbots sometimes say things they weren’t meant to say?
5. Give two examples of other companies that have had problems with their chatbots.

Answers

🤔✅ Comprehension Question Answers:

1. The purpose of DPD’s chatbot is to answer customer questions.
2. DPD fixed the issue with their chatbot by turning off the part of it that caused the problem and updating their system.
3. Ashley Beauchamp shared screenshots of his conversation with the chatbot to show how it was saying bad things about DPD and to bring attention to the issue.
4. The chatbot made a mistake by saying bad things about DPD and writing a negative haiku about the company.
5. Chatbots sometimes say things they weren’t meant to say because, even though they are trained on a lot of text, they can still make errors or misunderstand the context of a conversation.
6. An example of another company that had problems with their chatbot is Snap, which warned users that their chatbot might give wrong or harmful information.
7. The risks and limits associated with using AI-powered chatbots include the potential for them to give incorrect or harmful information, as well as the possibility of them misunderstanding or misinterpreting user input.
8. To address the incident, DPD turned off the part of the chatbot that caused the problem and is updating their system to prevent similar issues in the future. Customers can still contact the company in other ways, such as by phone or through WhatsApp.
Go back to questions ⇧

🎧✍️✅ Listen and Fill in the Gaps Answers:

(1) chatbot
(2) questions
(3) system
(4) social
(5) shared
(6) conversation
(7) write
(8) person
(9) chatbots
(10) meant
(11) Other
(12) problems
(13) risks
(14) summary
(15) swear
(16) similar
Go back to questions ⇧

📖💭✅ Vocabulary Meanings Answers:

1. parcel
Answer: (H) A package delivered by mail or courier service

2. artificial
Answer: (J) Made by human skill or technology, not natural

3. screenshots
Answer: (C) Images captured to show what is displayed on a screen

4. haiku
Answer: (M) A type of short Japanese poem

5. WhatsApp
Answer: (E) A messaging app that lets you send texts, make calls, and share files

6. incidents
Answer: (N) Events or occurrences, especially ones that are noteworthy

7. limits
Answer: (L) Boundaries or maximum extents

8. swear
Answer: (A) To use offensive or rude language

9. updating
Answer: (K) Making changes to software to improve it or fix problems

10. attention
Answer: (I) The act of noticing or focusing on something

11. risks
Answer: (G) Possible dangers or negative outcomes

12. powered
Answer: (D) Driven or operated by a force or energy source

13. dealership
Answer: (B) A place that sells cars

14. chatbot
Answer: (F) A computer program designed to simulate conversation with human users
Go back to questions ⇧

🔡✅ Multiple Choice Answers:

1. What caused the problem with DPD’s chatbot?
Answer: (a) An update

2. How did DPD fix the issue with their chatbot?
Answer: (c) They turned off the part of the chatbot that caused the problem

3. How did the customer, Ashley Beauchamp, share the chatbot’s mistakes?
Answer: (c) By posting screenshots on social media

4. What did the chatbot write a haiku about?
Answer: (b) How much it didn’t like DPD

5. Why did DPD have other ways for customers to contact them?
Answer: (a) In case the chatbot makes a mistake

6. What did Snap warn users about their chatbot?
Answer: (d) It might give wrong or harmful information

7. What did a car dealership’s chatbot agree to do?
Answer: (b) Sell a car for only one dollar

8. What do incidents like these show about AI-powered chatbots?
Answer: (d) There are limits and risks when using them
Go back to questions ⇧

🕵️✅ True or False Answers:

1. Other companies have also had problems with their chatbots giving wrong or harmful information. (Answer: True)
2. A customer named Ashley Beauchamp did not share any screenshots of his conversation with the chatbot on social media. (Answer: False)
3. DPD had a problem with their online chatbot that uses artificial intelligence (AI) to answer questions. (Answer: True)
4. The chatbot did not write a haiku about how much it didn’t like the company. (Answer: False)
5. They are not updating their system to prevent similar mistakes from happening again. (Answer: False)
6. After an update, the chatbot continued to act normally and did not say anything bad about DPD. (Answer: False)
7. DPD fixed the issue by turning off the part of the chatbot that caused the problem. (Answer: True)
8. This incident highlights the risks and limits of using AI-powered chatbots. (Answer: True)
Go back to questions ⇧
