Embracing AI

Evaluating AI results

As AI becomes more popular, it's important to be able to critically evaluate its results.

Even if you already know your stuff, you might have to do some detective work to make sure you can rely on the information.

Use this page to learn how to spot biases, question the limitations of AI, and understand the importance of fact-checking.

What is the SIFT method?

The SIFT method is a short checklist designed to help us sort fact from misinformation. The process has four "moves":

  1. Stop
  2. Investigate the source
  3. Find better sources
  4. Trace the original

Stop

Think about how people online often share things because they think it will impress their followers, or because it's easier to hop on a bandwagon than it is to check facts.

Generative AI can also be like this. It can look very impressive, and it is tempting to see it as a ready-made answer that you can copy and paste. This is why the first thing you should do with AI results is Stop and ask yourself a few questions:

  • Am I being too quick to accept what I'm being told?
  • Am I thinking rationally, or do I feel emotional or rushed?

💡Top Tip: Being able to pause and reflect is also a valuable skill in your workplace and personal life.

Investigate the source

Step 2 is to investigate the source. With AI, this means finding out more about the tool that produced the answer, so you can decide how reliable it is. Common problems for most generative AI tools are logical inconsistency, certain types of bias, and unreliable training data.

Large Language Models like ChatGPT are great at recreating human language, but they aren't always able to follow a logical process. For example, one may give you two contradictory pieces of evidence but claim that they mean the same thing.

  • Read it all the way through. Does it change its argument part way through, or does it stick to the point?
  • Look at each paragraph and argument. Does the evidence add up?
  • Think of what you already know about this subject. Do you recognise any specialist terminology? If so, is the language used correctly?

Bias happens when some factors skew the interpretation of evidence, or when some contexts are left out altogether.

Many AI models are trained on data from the internet. The internet often doesn't reflect reality (think of Instagram vs. real life), so AI can be very prone to bias.

  • Societal bias. Does this answer rely on stereotypes?
  • Geographical bias. Much of the training data may come from outside the UK, especially from the US. Can I trust it to accurately explain UK laws or describe the viewpoints of minorities? Does it use Americanisms?

It is hard to know how reliable an AI tool is without knowing a lot about how machine learning works. However, there are a few basic questions you can ask yourself:

  • Does the tech company use experts to do any quality checks on responses?
  • Can the tool use information from other sources to "ground" itself? If so, what are they? Can it search the internet or specialist databases?
  • Does the company have a public ethics or transparency policy?
  • What kind of constraints has the tech company put on what the AI can show you?

Find better sources

The most important step is to find better sources for the information. Chatbots like ChatGPT and Copilot might provide links to other sources of information; however, just because the sources are there doesn't mean they're accurate or reliable either. Here's what you can do to find better sources:

  • Do a quick Google or Wikipedia search. Does the AI's information line up with initial results? Do Google and Wikipedia list any other sources?
  • Follow the links suggested by the AI. Does the website look reliable?
  • Look for academic papers on a trusted platform like EBSCO Discover.
  • Check a trusted fact-checking site.

💡Top Tip: When creating your prompt, ask the AI to include sources for its claims.

Trace the original

The final check is to trace the origin of data, quotes and claims to their original source.

  • Follow the sources quoted by the AI. Are they the original sources? Or are they also reporting it from another source?
  • Have the quotes, claims or wording been taken out of context? Does the original source include other perspectives which are missing from the AI-generated content?
  • Check that dates and timelines make sense, and that claims have not been superseded.
  • For images, try doing a reverse Google Images search or use TinEye.com to find where images have been posted.