AI FAQ
What is AI and how does it work?
What is AI?
AI (Artificial Intelligence) is a loosely used term that has gained popularity with the emergence of OpenAI’s completion models, specifically ChatGPT, which is built on GPT (Generative Pre-trained Transformer) models.
In the most basic sense, GPT models are very sophisticated predictive text tools. Given a prompt, a model suggests the most probable next word or phrase based on what it learned during pre-training. It is simply predicting which word has the highest probability of coming next in a given sequence of words.
What is prompting?
Prompting is when you give a GPT model a collection of words to complete or expand on. For example, if you type “Humpty Dumpty sat on the …” as the prompt, a GPT will - very predictably - respond with “wall”.
In practice, you can ask a GPT a question and receive a detailed answer, or provide partial text that you want it to complete. To a degree, you can influence the output through your prompt: how you phrase your question, and the order in which you provide the prompt words, shapes the output the pre-trained model generates.
A GPT does not comprehend what you're putting in. The model’s objective is to complete a prompt - whether the outcome is correct or not.
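The “most probable next word” idea above can be sketched with a toy example. Here a hand-made probability table stands in for the billions of learned parameters in a real model, and the phrase and probabilities are invented for illustration:

```python
# Toy next-word predictor. A real GPT learns probabilities like these
# from vast amounts of text; here they are hard-coded for illustration.
NEXT_WORD_PROBS = {
    "humpty dumpty sat on the": {"wall": 0.95, "fence": 0.03, "cat": 0.02},
}

def complete(prompt: str) -> str:
    """Return the most probable next word for a known prompt."""
    probs = NEXT_WORD_PROBS[prompt.lower().strip()]
    return max(probs, key=probs.get)  # pick the highest-probability word

print(complete("Humpty Dumpty sat on the"))  # wall
```

Note that the function has no idea what a wall is; it only knows that “wall” is the likeliest continuation, which is exactly the point made above.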
How can I use AI on the platform?
A generated prompt result can be transformed into content - an article or a story in a chapter, for example. You can also use the output to source appropriate media based on keywords in the generated text.
What are tokens/words within the context of AI prompting?
Definitions vary depending on the AI technology being used, but they generally count text in tokens rather than words. A token is a chunk of text - roughly four characters of English, or about three quarters of a word on average - so 100 tokens correspond to roughly 75 words.
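The four-characters-per-token rule of thumb can be sketched as a rough estimator. This is only an approximation: real tokenisers (for example OpenAI’s tiktoken library) split text into learned sub-word units, so actual counts will differ.

```python
def estimate_tokens(text: str) -> int:
    """Rough token estimate using the ~4 characters per token rule of thumb.

    Real tokenisers split on learned sub-word units, so treat this
    only as a ballpark figure for budgeting prompt length.
    """
    return max(1, round(len(text) / 4))

print(estimate_tokens("Humpty Dumpty sat on the wall"))  # 7
```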
How can instructional designers and content authors make good use of GPT? What is best practice?
Content authors only need to provide a topic written as a simple phrase. Complicated or long prompts increase the risk that an output will be unpredictable or inaccurate.
If you want more detail, prompt several subtopics (a little trick is to ask ChatGPT to provide you with those subtopic prompts).
If you provide a general topic, you will get a general overview with fewer details. The trick lies in prompting with specificity: prompting “anatomy of a tarantula spider” or “spider habitats” rather than “spiders” produces more detailed content that can then be turned into multiple learning stories.
Authors can adjust the temperature of their output. The lower the temperature, the more deterministic and predictable the output. For example, if you prompt “Humpty Dumpty sits on the ...” with the temperature turned right down, it will output “wall” every single time, as the outcome is more deterministic.
If the temperature is increased, the output gets more random: it might produce “Humpty Dumpty sits on the cat”, then “Humpty Dumpty sits on the wall”, and then “Humpty Dumpty sits on the dog”. Randomness is best used for more “creative” outputs - poetry, for example.
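The effect of temperature can be sketched with a softmax over candidate words. The words and scores below are invented for illustration; the mechanism - dividing the model’s raw scores by the temperature before converting them to probabilities - is the standard one:

```python
import math
import random

def softmax_with_temperature(logits, temperature):
    """Convert raw model scores into next-word probabilities.

    Lower temperature sharpens the distribution (more deterministic);
    higher temperature flattens it (more random).
    """
    scaled = [x / temperature for x in logits]
    peak = max(scaled)  # subtract the max for numerical stability
    exps = [math.exp(x - peak) for x in scaled]
    total = sum(exps)
    return [e / total for e in exps]

# Hypothetical scores for candidates after "Humpty Dumpty sits on the ..."
words = ["wall", "cat", "dog"]
scores = [5.0, 2.0, 1.0]

cold = softmax_with_temperature(scores, 0.5)  # "wall" dominates (~99.7%)
hot = softmax_with_temperature(scores, 5.0)   # "cat" and "dog" become plausible

pick = random.choices(words, weights=hot)[0]  # high-temperature sampling varies
```

At low temperature the highest-scoring word is chosen almost every time; at high temperature the alternatives get a real chance, which is exactly the “less predictable and more random” behaviour described above.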
What are the ethical considerations in terms of referencing, fact checking and plagiarism when using GPT to generate content?
GPT models are pre-trained on data drawn from the open Internet. For this reason, there is a risk of plagiarism: a model can sometimes output direct quotes from its source material, which is a concern. Yes, the models will evolve over time and this may change, but for now it must be considered.
Authors have a responsibility to add a note at the beginning or at the end of the output text to indicate that it has been generated by AI. That would be ethically correct - whether it's on a news site, learning module or anywhere else.
Authors can also consider indicating whether or not the output has been checked by a subject matter expert. AI doesn't (currently) know the difference between fact and fiction and the output is only valuable if it has been verified by a professional in the field at hand. If AI was the main writer and the main conceptual producer of the topic, then it should be noted.