TerpAI
TerpAI is DIT's web-based console for managing cloud infrastructure and general-purpose Generative AI (GenAI) chatbots. The platform and its chatbots integrate generative AI capabilities built on Generative Pre-trained Transformer (GPT) technology, supporting efficient, secure management of cloud resources and enhancing user interaction through AI-driven chatbots. TerpAI leverages technologies such as Natural Language Processing (NLP), Large Language Models (LLMs), and Retrieval-Augmented Generation (RAG). Using NLP, the platform enables chatbots to understand and interpret human language with high accuracy, facilitating natural and intuitive interactions. By harnessing LLMs such as GPT-3.5 and GPT-4o, TerpAI chatbots generate contextually relevant, coherent responses and provide meaningful insights that enhance user experience and engagement.
Key Capabilities of Generative AI Chatbots
Generative Pre-trained Transformer (GPT)
GPT is an advanced AI language model developed by OpenAI that can understand and generate human-like text. It can assist with tasks such as drafting emails, writing reports, and answering questions by predicting and creating relevant text based on the input it receives.
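As a rough illustration of how a chatbot sends a prompt to a GPT model and reads back the generated text, the following Python sketch uses the OpenAI SDK. The model name, system prompt, and example request are illustrative assumptions only and do not reflect TerpAI's internal implementation.

```python
# Minimal sketch: send a prompt to a GPT model and print the generated reply.
# Assumes the openai package is installed and OPENAI_API_KEY is set; the model
# name and prompt are illustrative, not TerpAI's actual configuration.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="gpt-4o",
    messages=[
        {"role": "system", "content": "You are a helpful campus assistant."},
        {"role": "user", "content": "Draft a short email announcing a lab closure on Friday."},
    ],
)

print(response.choices[0].message.content)
```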
Large Language Models (LLMs)
LLMs are advanced AI systems designed to understand and generate human-like text by processing vast amounts of data. They can support applications such as customer support, content creation, and language translation by predicting and generating relevant text based on the given input.
Natural Language Processing (NLP)
NLP allows the chatbot to understand and generate human-like text. It processes and analyzes large amounts of natural language data to respond in a way that is contextually relevant and coherent. This capability enables the chatbot to engage in conversations, answer questions, and perform language-based tasks with a high level of proficiency.
Retrieval-Augmented Generation (RAG)
RAG is a technique that combines large language models such as ChatGPT and Claude with external information retrieval. It enables the chatbot to pull in information from a curated domain knowledge base specific to the departments where the bot is used, supplementing its pre-trained knowledge. This is particularly useful for answering questions that require up-to-date or specialized information grounded in the curated knowledge base (KB).
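The sketch below illustrates the RAG flow described above: retrieve the most relevant passages from a knowledge base, then pass them to the model as context so the answer stays grounded in that material. The in-memory KB_PASSAGES list, the toy word-overlap retriever, and the model name are hypothetical placeholders, not TerpAI's actual retrieval pipeline.

```python
# Illustrative RAG flow: retrieve relevant KB passages, then ask the model to
# answer using only that retrieved context.
from openai import OpenAI

client = OpenAI()

# Hypothetical in-memory knowledge base standing in for a department's curated KB.
KB_PASSAGES = [
    "Travel reimbursements must be submitted within 30 days of the trip end date.",
    "The help desk is open Monday through Friday, 8 a.m. to 6 p.m.",
    "Guest Wi-Fi accounts expire after 24 hours and can be renewed online.",
]

def search_knowledge_base(query: str, top_k: int = 2) -> list[str]:
    """Toy retriever: rank passages by word overlap with the query."""
    query_words = set(query.lower().split())
    scored = sorted(
        KB_PASSAGES,
        key=lambda p: len(query_words & set(p.lower().split())),
        reverse=True,
    )
    return scored[:top_k]

def answer_with_rag(question: str) -> str:
    context = "\n".join(search_knowledge_base(question))
    response = client.chat.completions.create(
        model="gpt-4o",
        messages=[
            {"role": "system",
             "content": "Answer using only the provided context; "
                        "if it is insufficient, say so."},
            {"role": "user", "content": f"Context:\n{context}\n\nQuestion: {question}"},
        ],
    )
    return response.choices[0].message.content

print(answer_with_rag("When do travel reimbursements have to be submitted?"))
```

In a production deployment the toy retriever would typically be replaced by a vector or keyword search over the department's curated documents.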
Understanding complex queries
The AI chatbot is equipped to understand and respond to complex queries: it can handle multi-part questions, infer the intent behind a query, and provide answers that address all aspects of the question. This is achieved through algorithms that analyze the structure and semantics of the query.
Context retention
This refers to the chatbot's ability to remember and use the context of a conversation for up to two exchanges. It can recall earlier parts of the conversation and use that information to make responses more relevant and personalized. Context retention is crucial for maintaining a coherent and logical flow in conversations.
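As a rough sketch of how the two-exchange context window described above could be maintained, the following Python keeps a rolling message history and trims it to the most recent exchanges before each request. The trimming rule, system prompt, and model name are assumptions for illustration, not TerpAI's actual behavior.

```python
# Sketch of context retention: keep a rolling history of messages and trim it
# to the last two user/assistant exchanges before each request.
from openai import OpenAI

client = OpenAI()

SYSTEM_PROMPT = {"role": "system", "content": "You are a helpful campus assistant."}
MAX_EXCHANGES = 2  # one exchange = one user turn plus one assistant turn

history: list[dict] = []

def chat(user_message: str) -> str:
    history.append({"role": "user", "content": user_message})
    # Keep only the most recent exchanges so earlier turns still inform the
    # reply without the prompt growing without bound.
    trimmed = history[-(MAX_EXCHANGES * 2 - 1):]
    response = client.chat.completions.create(
        model="gpt-4o",
        messages=[SYSTEM_PROMPT, *trimmed],
    )
    reply = response.choices[0].message.content
    history.append({"role": "assistant", "content": reply})
    return reply

print(chat("My NetID is jsmith. How do I reset my password?"))
print(chat("And where do I go if that doesn't work?"))  # recalls the earlier turn
```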
Key Benefits of Generative AI Chatbots
Productivity enhancing use cases
- Answer questions in a conversational manner (with text or voice chat)
- Generate blog posts and other kinds of short- and long-form content
- Edit content for tone, style, and grammar
- Summarize long passages of text
- Translate text to different languages
- Brainstorm ideas
- Create and analyze images
- Answer questions about charts and graphs
- Write code based on design mockups
24/7 availability
- AI chatbots provide round-the-clock assistance, answering student, faculty, and staff queries anytime, which is especially beneficial for universities with a large or international student body across different time zones.
Instant response
- Chatbots provide immediate answers to user queries, significantly reducing wait times compared to human-operated services. This also reduces the load on administrative staff by cutting email and call volume.
Handling high volume of interactions
- During peak periods like admissions or exam seasons, chatbots can efficiently handle a high volume of student interactions, reducing the pressure on the staff.
Cost-effectiveness
- Chatbots help save on labor costs and resource allocation by automating responses to common queries.
Scalability
- Chatbots can easily scale up to handle an increasing number of interactions without needing significant additional resources, unlike human-operated services, which require more staff as volume grows.
Key Challenges of Generative AI Chatbots
Understanding context and nuance
- AI chatbots have a limited ability to understand the context and nuances of human conversation. They might misinterpret sarcasm, idioms, or complex sentences, leading to irrelevant responses.
Dependency on quality and diversity of training data
- The performance of an AI chatbot heavily depends on the quality and diversity of the data it was trained on. Biases in the training data can lead to biased responses, which can be problematic, especially in sensitive topics.