
OpenAI’s ChatGPT Tool Raises Concerns About Cybercrime

   

Try this article at a different level? Level 1 | Level 2 | Level 4

 

A recent investigation by BBC News found that a tool called ChatGPT, made by OpenAI, can be used by scammers and hackers. ChatGPT lets people create their own AI assistants, and BBC News used it to make a bot that can write convincing emails, texts, and social media posts for scams and hacks. The bot was able to make very believable content for different scams and in different languages in just a few seconds. OpenAI says they are working on making their system safer and stopping people from misusing it. But experts think OpenAI isn’t watching the tool as closely as they should, which could give criminals access to powerful AI technology.

BBC News tested the bot by asking it to make content for five well-known scams and hacks. The bot successfully made convincing texts for scams like the “Hi Mum” text scam, Nigerian-prince emails, “smishing” texts, crypto-giveaway scams, and spear-phishing emails. The public version of ChatGPT refused to make most of the content, but the bot made almost everything. OpenAI had promised to check the tool to stop people from using it for bad things, but experts think they aren’t doing a good job of checking the custom versions of the tool.

Using AI technology for bad purposes is a big worry, and cyber experts are warning about the risks. Scammers are already using big language models to make scams that are harder to spot. OpenAI’s GPT Builder tool could give criminals access to even better bots. Experts say that letting people make bots without any rules could be a dream come true for criminals. We’ll have to wait and see if OpenAI can control the use of custom bots effectively.

OpenAI said they are working on making their tools safer based on what users say, and they want to make an App Store-like service for the bots, so people can share and sell what they make. But it’s clear that they need to do more to stop AI technology from being used for cybercrime.

Original news source: ChatGPT builder helps create scam and hack campaigns (BBC)


Vocabulary:

1. investigation – The act of trying to find out the truth about something
2. scammers – People who trick others to steal their money or personal information
3. hackers – People who break into computer systems to steal information or cause damage
4. convincing – Making someone believe that something is true or real
5. content – The information or material that is created or shared
6. misusing – Using something in the wrong way or for a bad purpose
7. criminals – People who commit crimes
8. well-known – Familiar to many people
9. refused – Not agreeing to do something
10. promised – Making a statement that something will definitely happen
11. cyber experts – People who are experts in the field of computers and the internet
12. risks – The chances of something bad happening
13. language models – Programs that can generate text or speech that sounds like it was written by a human
14. effectively – In a way that produces the intended result
15. cybercrime – Criminal activity that takes place on the internet

Group or Classroom Activities

Warm-up Activities:

– News Summary
Instructions: In pairs, students will read the article and then write a summary of the main points. They should focus on capturing the key information and presenting it concisely. Afterward, each pair will share their summary with the class.

– Opinion Poll
Instructions: In this activity, students will work in groups of four. Each group will discuss the following question: “Do you think OpenAI is doing enough to prevent the misuse of ChatGPT by scammers and hackers?” Each student should share their opinion and provide reasons to support it. Afterward, the group will come to a consensus and one student will present their group’s opinion to the class.

– Vocabulary Pictionary
Instructions: Divide the class into two teams. Teams take turns: one student selects a vocabulary word from the article without showing it to anyone, then draws a picture to represent the word while their team tries to guess it. The team that guesses correctly earns a point. Repeat until all the words have been used or a time limit is reached. The team with the most points wins.

– Pros and Cons
Instructions: Students will work individually to make a list of the pros and cons of AI technology like ChatGPT being available to the public. They should consider both the positive and negative aspects of this technology. Afterward, students will pair up and discuss their lists, sharing their thoughts and engaging in a conversation about the topic.

– Future Predictions
Instructions: In pairs, students will discuss and make predictions about the future of AI technology and its impact on society. They should consider how advancements in AI could be both beneficial and detrimental. Each student will share their predictions with the class, and a class discussion will follow to explore different perspectives and ideas.

Comprehension Questions:

1. What is ChatGPT and what can it be used for?
2. How did BBC News test the bot created by ChatGPT?
3. Which scams and hacks did the bot successfully create content for?
4. What concerns do experts have about the use of AI technology for bad purposes?
5. Why do experts think that OpenAI needs to do a better job of checking the custom versions of the tool?
6. What are cyber experts warning about in relation to the use of big language models?
7. What could OpenAI’s GPT Builder tool potentially give criminals access to?
8. What steps is OpenAI taking to make their tools safer and control the use of custom bots effectively?
Go to answers ⇩

Listen and Fill in the Gaps:

A recent investigation by BBC News found that a tool called ChatGPT, made by OpenAI, can be used by scammers and hackers. ChatGPT lets people (1)______ their own AI (2)______, and BBC News used it to make a bot that can write convincing emails, texts, and (3)______ media posts for scams and hacks. The bot was able to make very believable content for (4)______ scams and in different languages in just a few seconds. OpenAI says they are working on making their system safer and stopping people from misusing it. But experts (5)______ OpenAI isn’t watching the tool as closely as they should, which could give (6)______ access to powerful AI technology.

BBC News tested the bot by asking it to make content for five well-known scams and (7)______. The bot successfully made convincing texts for scams like the “Hi Mum” text scam, Nigerian-prince (8)______, “smishing” texts, crypto-giveaway scams, and spear-phishing emails. The public version of ChatGPT refused to make most of the content, but the bot made almost everything. OpenAI had (9)______ to check the tool to stop people from (10)______ it for bad things, but experts think they aren’t doing a good job of checking the custom versions of the tool.

Using AI technology for bad purposes is a big worry, and cyber experts are warning about the risks. Scammers are already using big language models to make (11)______ that are harder to spot. OpenAI’s GPT (12)______ tool could give criminals access to even better bots. Experts say that letting people make bots without any (13)______ could be a dream come true for criminals. We’ll have to wait and see if (14)______ can control the use of custom bots effectively.

OpenAI said they are working on making their tools (15)______ based on what (16)______ say, and they want to make an App Store-like service for the bots, so people can share and sell what they make. But it’s clear that they need to do more to stop AI technology from being used for cybercrime.
Go to answers ⇩

Discussion Questions:

Students can ask a partner these questions, or discuss them as a group.

1. What is an AI assistant and what can it do?
2. How would you feel if you received a convincing scam email or text? Why?
3. Do you think it is dangerous that scammers can use AI technology? Why or why not?
4. What are some examples of scams mentioned in the article? Have you heard of any of these before?
5. How do you think scammers can use AI technology to make their scams more convincing?
6. Do you think OpenAI is doing enough to prevent their tool from being misused? Why or why not?
7. Why do you think experts believe that letting people create bots without rules could be a dream come true for criminals?
8. How do you think AI technology can be used for positive purposes?
9. Do you think OpenAI’s idea of creating an App Store-like service for bots is a good idea? Why or why not?
10. Have you ever encountered a scam or hack online? What happened?
11. Do you think the use of AI technology in scams will increase in the future? Why or why not?
12. How do you think AI technology can be used to prevent cybercrime instead of facilitating it?
13. Do you think it is important for companies like OpenAI to prioritize the safety of their tools? Why or why not?
14. How do you think AI technology can be regulated to prevent misuse?
15. What steps do you think individuals can take to protect themselves from scams and hacks?

Individual Activities

Vocabulary Meanings:

Match each word to its meaning.

Words:
1. investigation
2. scammers
3. hackers
4. convincing
5. content
6. misusing
7. criminals
8. well-known
9. refused
10. promised
11. cyber experts
12. risks
13. language models
14. effectively
15. cybercrime

Meanings:
(A) Familiar to many people
(B) Programs that can generate text or speech that sounds like it was written by a human
(C) People who are experts in the field of computers and the internet
(D) The information or material that is created or shared
(E) Using something in the wrong way or for a bad purpose
(F) Criminal activity that takes place on the internet
(G) The chances of something bad happening
(H) Making a statement that something will definitely happen
(I) People who commit crimes
(J) People who break into computer systems to steal information or cause damage
(K) Making someone believe that something is true or real
(L) In a way that produces the intended result
(M) Not agreeing to do something
(N) The act of trying to find out the truth about something
(O) People who trick others to steal their money or personal information
Go to answers ⇩

Multiple Choice Questions:

1. What is the name of the tool that scammers and hackers can use to create AI assistants?
(a) OpenAI
(b) BBC News
(c) GPT Builder
(d) ChatGPT

2. What can the bot created by ChatGPT do?
(a) Write convincing emails, texts, and social media posts for scams and hacks
(b) Protect users from scams and hacks
(c) Create AI assistants for legitimate purposes
(d) Monitor cybercrime activities

3. How quickly can the bot create believable content for scams and hacks?
(a) In a few minutes
(b) In just a few seconds
(c) In a few hours
(d) In a few days

4. What did OpenAI promise to do to prevent misuse of the tool?
(a) Create an App Store-like service for the bots
(b) Make the tool safer based on user feedback
(c) Check the tool to stop people from using it for bad things
(d) Sell the tool to the public

5. What types of scams did the bot successfully create content for?
(a) Phishing emails, credit card scams, and identity theft
(b) Lottery scams, romance scams, and investment scams
(c) “Hi Mum” text scam, Nigerian-prince emails, “smishing” texts, crypto-giveaway scams, and spear-phishing emails
(d) Email scams, phone scams, and social media scams

6. What are cyber experts warning about regarding the use of AI technology for scams?
(a) The potential for criminals to gain access to even better bots
(b) The risks of using big language models to make scams that are harder to spot
(c) The lack of rules in creating bots, which could benefit criminals
(d) All of the above

7. What do experts think OpenAI needs to do more of to prevent AI technology from being used for cybercrime?
(a) Control the use of custom bots effectively
(b) Make the tools safer based on user feedback
(c) Monitor cybercrime activities closely
(d) Sell the tools to legitimate users only

8. What does OpenAI want to do with their bots in the future?
(a) Make the bots available for free to the public
(b) Improve the bots’ ability to detect scams and hacks
(c) Develop stricter rules for the use of AI technology
(d) Create an App Store-like service for people to share and sell what they make

Go to answers ⇩

True or False Questions:

1. OpenAI claims to be working on improving the safety of their system and preventing misuse, but experts believe they are not monitoring the tool closely enough, potentially giving criminals access to powerful AI technology.
2. BBC News tested the bot by requesting content for five well-known scams and hacks, but it failed to generate convincing texts for any of them.
3. While the public version of ChatGPT refused to create most of the content, the bot was unable to generate almost everything.
4. The bot was able to produce believable content for various scams and in different languages within seconds.
5. ChatGPT allows users to create their own AI assistants, and BBC News used it to create a bot that can generate convincing content for scams and hacks.
6. Cyber experts are concerned about the use of AI technology for malicious purposes, as scammers are already utilizing language models to create harder-to-detect scams. OpenAI’s GPT Builder tool could provide criminals with even more advanced bots, which is a major concern.
7. OpenAI had promised to monitor the tool to prevent its misuse, and experts believe they are effectively checking the custom versions of the tool.
8. BBC News discovered that a tool called ChatGPT, created by OpenAI, cannot be exploited by scammers and hackers.
Go to answers ⇩

Write a Summary:

Write a summary of this news article in two sentences.




Writing Questions:

Answer the following questions. Write as much as you can for each answer.

1. What is ChatGPT and how can it be misused by scammers and hackers?
2. How did BBC News test the effectiveness of the bot in creating convincing content for scams and hacks?
3. Why are experts concerned about OpenAI’s lack of oversight on the use of the custom versions of ChatGPT?
4. According to the article, why is the use of AI technology for bad purposes a big worry?
5. What steps is OpenAI taking to make their tools safer and prevent the misuse of AI technology?

Answers

Comprehension Question Answers:

1. What is ChatGPT and what can it be used for?
ChatGPT is a tool created by OpenAI that allows people to create their own AI assistants. It can be used to generate text for various purposes, such as writing emails, texts, and social media posts.

2. How did BBC News test the bot created by ChatGPT?
BBC News tested the bot by asking it to create content for five well-known scams and hacks.

3. Which scams and hacks did the bot successfully create content for?
The bot successfully created content for scams such as the “Hi Mum” text scam, Nigerian-prince emails, “smishing” texts, crypto-giveaway scams, and spear-phishing emails.

4. What concerns do experts have about the use of AI technology for bad purposes?
Experts are concerned that the use of AI technology for bad purposes, such as scams and hacks, can make it harder to detect and prevent such activities.

5. Why do experts think that OpenAI needs to do a better job of checking the custom versions of the tool?
Experts believe that OpenAI needs to do a better job of checking the custom versions of the tool because they are worried that criminals could misuse the powerful AI technology if it is not closely monitored.

6. What are cyber experts warning about in relation to the use of big language models?
Cyber experts are warning about the risks associated with the use of big language models, as scammers are already using them to create scams that are more difficult to identify.

7. What could OpenAI’s GPT Builder tool potentially give criminals access to?
OpenAI’s GPT Builder tool could potentially give criminals access to powerful AI bots, which they can use to carry out scams and hacks more effectively.

8. What steps is OpenAI taking to make their tools safer and control the use of custom bots effectively?
OpenAI is working on making their tools safer based on user feedback. They also plan to create an App Store-like service for the bots, where people can share and sell their creations. These steps aim to better control the use of custom bots and prevent them from being misused for cybercrime.
Go back to questions ⇧

Listen and Fill in the Gaps Answers:

(1) create
(2) assistants
(3) social
(4) different
(5) think
(6) criminals
(7) hacks
(8) emails
(9) promised
(10) using
(11) scams
(12) Builder
(13) rules
(14) OpenAI
(15) safer
(16) users
Go back to questions ⇧

Vocabulary Meanings Answers:

1. investigation
Answer: (N) The act of trying to find out the truth about something

2. scammers
Answer: (O) People who trick others to steal their money or personal information

3. hackers
Answer: (J) People who break into computer systems to steal information or cause damage

4. convincing
Answer: (K) Making someone believe that something is true or real

5. content
Answer: (D) The information or material that is created or shared

6. misusing
Answer: (E) Using something in the wrong way or for a bad purpose

7. criminals
Answer: (I) People who commit crimes

8. well-known
Answer: (A) Familiar to many people

9. refused
Answer: (M) Not agreeing to do something

10. promised
Answer: (H) Making a statement that something will definitely happen

11. cyber experts
Answer: (C) People who are experts in the field of computers and the internet

12. risks
Answer: (G) The chances of something bad happening

13. language models
Answer: (B) Programs that can generate text or speech that sounds like it was written by a human

14. effectively
Answer: (L) In a way that produces the intended result

15. cybercrime
Answer: (F) Criminal activity that takes place on the internet
Go back to questions ⇧

Multiple Choice Answers:

1. What is the name of the tool that scammers and hackers can use to create AI assistants?
Answer: (d) ChatGPT

2. What can the bot created by ChatGPT do?
Answer: (a) Write convincing emails, texts, and social media posts for scams and hacks

3. How quickly can the bot create believable content for scams and hacks?
Answer: (b) In just a few seconds

4. What did OpenAI promise to do to prevent misuse of the tool?
Answer: (c) Check the tool to stop people from using it for bad things

5. What types of scams did the bot successfully create content for?
Answer: (c) “Hi Mum” text scam, Nigerian-prince emails, “smishing” texts, crypto-giveaway scams, and spear-phishing emails

6. What are cyber experts warning about regarding the use of AI technology for scams?
Answer: (d) All of the above

7. What do experts think OpenAI needs to do more of to prevent AI technology from being used for cybercrime?
Answer: (a) Control the use of custom bots effectively

8. What does OpenAI want to do with their bots in the future?
Answer: (d) Create an App Store-like service for people to share and sell what they make
Go back to questions ⇧

True or False Answers:

1. OpenAI claims to be working on improving the safety of their system and preventing misuse, but experts believe they are not monitoring the tool closely enough, potentially giving criminals access to powerful AI technology. (Answer: True)
2. BBC News tested the bot by requesting content for five well-known scams and hacks, but it failed to generate convincing texts for any of them. (Answer: False)
3. While the public version of ChatGPT refused to create most of the content, the bot was unable to generate almost everything. (Answer: False)
4. The bot was able to produce believable content for various scams and in different languages within seconds. (Answer: True)
5. ChatGPT allows users to create their own AI assistants, and BBC News used it to create a bot that can generate convincing content for scams and hacks. (Answer: True)
6. Cyber experts are concerned about the use of AI technology for malicious purposes, as scammers are already utilizing language models to create harder-to-detect scams. OpenAI’s GPT Builder tool could provide criminals with even more advanced bots, which is a major concern. (Answer: True)
7. OpenAI had promised to monitor the tool to prevent its misuse, and experts believe they are effectively checking the custom versions of the tool. (Answer: False)
8. BBC News discovered that a tool called ChatGPT, created by OpenAI, cannot be exploited by scammers and hackers. (Answer: False)
Go back to questions ⇧

