
Artificial Intelligence
What is Artificial Intelligence?
Artificial Intelligence (AI) is a field of computer science focused on creating systems that perform tasks typically requiring human intelligence, such as visual perception, speech recognition, decision-making, and problem-solving. A major sub-field of AI is Machine Learning (e.g., email filters, self-driving cars), which in turn includes Deep Learning (e.g., voice assistants like Siri, AI models like ChatGPT).

A common misconception is that AI mimics human thinking. In reality, AI uses algorithms to recognize patterns and predict outcomes. This means AI can carry biases based on its training data. It’s crucial that AI be used as a tool to enhance critical thinking, not just as a simple “question and answer” system like Google.

How is AI Being Utilized to Commit Cyber Crimes?
While AI has helped make people's lives more efficient, the FBI warns that AI augments and enhances schemes that terrorist organizations and criminals already use, increasing the speed, scale, and automation of cyber-attacks. Cybercriminals leverage publicly available and custom-made AI tools to orchestrate highly targeted phishing campaigns, exploiting the trust of individuals and organizations alike.
It is essential to be prepared for a range of potential threats, from voice cloning that can trick a loved one into revealing sensitive information to the manipulation of a person’s photo into a deepfake used to exploit them. SafeOC is committed to equipping individuals with the necessary knowledge and resources to address these rapidly evolving challenges effectively.
Types of Threats


Voice Cloning
Also known as voice synthesis or voice mimicry, voice cloning uses AI technology to replicate someone else’s voice with remarkable accuracy. While initially developed for benign purposes such as voice assistants and entertainment, it has also become a tool for malicious actors seeking to exploit unsuspecting victims.


Deepfakes
AI-generated videos or audio clips that make it appear as though someone is saying or doing something they never did. Deepfakes can be used to defame individuals and commit fraud.


Fake Online Accounts
Creation of fake online accounts to target children with the intent to commit sexual offenses.


AI-Generated CSAM
Child sexual abuse material (CSAM) is photo or video documentation of the sexual abuse of a prepubescent child. Abusers with access to one CSAM image can use AI software to generate hundreds of images positioning the child victim in different scenes. Bad actors can also take any publicly available images of children and create AI-generated CSAM with their likeness, also known as “deepfakes.” These images can then be used for grooming or sextortion.

How to Report
If a sexually explicit image of you or someone you know, taken before the age of 18, is circulating online, whether real or AI-generated, the National Center for Missing & Exploited Children (NCMEC) has a program called Take It Down that can help. This tool allows individuals to anonymously request the removal of explicit images from participating platforms.
To learn more about local reporting, click below
To submit your request for removal, click below
AI Safety Video
Check out our video resources explaining how you can recognize AI-generated images or videos and tips on how to stay safe while online:
