Life with AI
Artificial intelligence is here to stay, and it is rapidly becoming integrated into our everyday lives. With that integration comes a need for both innovation and guardrails.
OpenAI. (2025). Futuristic digital tree in a nighttime forest [AI-generated image]. ChatGPT with DALL·E. https://openai.com/dall-e
The above image was generated through ChatGPT with DALL·E and is cited to credit the AI model and its developer, OpenAI, as the creators of the image.
Is AI the cure for online toxicity?
Researchers from the University of South Australia and Bangladesh’s East West University have recently developed a machine learning model that can identify toxic comments and posts with up to 87% accuracy. An artificial intelligence model that flags ‘toxic’ content at this level of accuracy without censoring discussion could represent a leap forward for digital diplomacy if the technology were integrated into social media platforms and online discussion forums. Identifying ‘toxic’ content, potentially including but not limited to racial prejudice, sexism, trolling, and other forms of harassment or hate speech, could give tech companies the ability to regulate inappropriate behavior by users. The researchers optimized a baseline Support Vector Machine (SVM) model with an accuracy rate of 69.9% to produce an optimized SVM model with an accuracy rate of 87.9%, which also outperformed a comparable Stochastic Gradient Descent model with an accuracy rate of 83.4%.
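For readers curious what such a model comparison looks like in practice, here is a minimal sketch in Python using scikit-learn. It is not the researchers’ pipeline: the toy comments, the TF-IDF features, and the specific classifiers (LinearSVC and SGDClassifier) are illustrative assumptions. The overall structure, a support vector machine evaluated against a classifier trained with stochastic gradient descent on the same labeled data, simply mirrors the comparison described above.

```python
# Minimal, illustrative sketch (not the researchers' actual pipeline or data):
# compare a linear support vector machine with a classifier trained via
# stochastic gradient descent on a tiny toy set of labeled comments.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import SGDClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.svm import LinearSVC

# Toy labeled comments: 1 = toxic, 0 = non-toxic.
comments = [
    "You are a complete idiot and nobody wants you here",
    "Thanks for sharing, this was really helpful",
    "Go away, people like you ruin every discussion",
    "I disagree, but I see where you are coming from",
    "What a pathetic excuse for an argument",
    "Great point, I had not considered that angle",
]
labels = [1, 0, 1, 0, 1, 0]

X_train, X_test, y_train, y_test = train_test_split(
    comments, labels, test_size=0.33, stratify=labels, random_state=42
)

# Support vector machine: TF-IDF text features feeding a linear SVM.
svm_model = make_pipeline(TfidfVectorizer(), LinearSVC())
svm_model.fit(X_train, y_train)

# Comparison model: the same features, but trained with stochastic gradient descent.
sgd_model = make_pipeline(
    TfidfVectorizer(), SGDClassifier(loss="hinge", random_state=42)
)
sgd_model.fit(X_train, y_train)

# Accuracy on held-out comments is the metric quoted above (69.9%, 87.9%, 83.4%).
for name, model in [("SVM", svm_model), ("SGD", sgd_model)]:
    print(f"{name} accuracy: {accuracy_score(y_test, model.predict(X_test)):.1%}")
```

A real moderation system would be trained and evaluated on a far larger labeled dataset, and the researchers’ reported gains came from optimizing the SVM itself, a tuning step this sketch does not attempt.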
Takeaway: The optimized SVM model’s increased accuracy in detecting harmful content represents a key opportunity for moderating harassment, hate speech, and other forms of inappropriate content online, especially given the rise in digital trolling and cyberbullying over the past decade. However, AI content moderation poses the ethical risk of limiting digital freedom of speech, and more research and legislation are needed to regulate its use. - Natalie Sherman
Read more about this research on the University of South Australia’s website here.
AI and Online Dating
Artificial intelligence has been making its way into everything, and it seems the dating scene is no exception. Seeking to test out the latest AI dating tools, New York Times reporter Eli Tan documented his experiences experimenting with the various AI offerings. Certain dating apps such as Ice allow users to create an AI version of themselves to talk to other users and vet relationship candidates, and Tan’s AI clone went on hundreds of virtual dates with other users’ clones. Tan’s clone was trained on his personal interests and conversation style, allowing it to determine which conversations might result in a good match. Despite its surprising accuracy at mimicking Tan’s mannerisms, he reported that the clone “rarely helped [him] understand the human on the other end, and [he] was disappointed at how formal the conversations felt.” Perhaps unsurprisingly, Ice and a few similar apps have since shut down their services. A slightly less futuristic offering is the AI dating coach. Apps like Rizz are intended to be paired with other online dating services and provide feedback on profile development, responding to messages, and gauging a potential partner’s interest levels. Tan tried these services as well, and even went on a few dates in person. Nevertheless, he reported that AI “had done little to improve [his] dating life,” noting that the AI apps could only get you to the first date; in person, it’s all on you.
Takeaway: Although apps like Rizz could be helpful in guiding those intimidated by online dating platforms, they should never replace true human connection. Ultimately, something as personal as a romantic relationship – even a potential one – was not meant to be influenced by artificial intelligence. –Grace Ogden
To read more about Eli Tan’s AI dating experience, please visit the New York Times.
AI and Religion
Recently, artificial intelligence has been applied to a rather unexpected field: theology. As religious leaders begin incorporating AI into their work, a new business opportunity has opened up for faith-based tech companies to create AI tools geared specifically toward religious purposes. Some applications of this technology are widely accepted as beneficial. For example, using chatbots to answer worshipers’ questions about service times, translate livestreamed services into other languages instantaneously, and increase the efficiency of theological research are all relatively uncontroversial endeavors. Other uses, however, are more contentious. Writing entire sermons with AI or using AI voice replication to preach during a service has drawn mixed reactions. Many are concerned about the theological accuracy of this technology, which has already been documented hallucinating quotes during a sermon. Others are more willing to embrace it, arguing that the novelty is attracting new churchgoers at a time of declining attendance.
Takeaway: Artificial intelligence can be a wonderful tool for countless disciplines, and religion is no exception. However, while AI can make many aspects of religion more efficient and accessible, it must not spread false information or replace the human elements of faith. In other words, we must find a balance between technology and classical theology. –Grace Ogden
To read more about AI and religion, please visit the New York Times.