hum is an AI companion for researchers using AI tools. It surfaces the moment when an answer that was merely reachable is being mistaken for legitimate ground, especially when an AI assistant helped you get there.
Most AI-assisted research errors are not hallucinations in the dramatic sense. They are substitutions: an AI tool encounters a term or question with no established referent and quietly replaces it with a nearby familiar concept, then answers the substituted question as if it were the original.
The AI is not fabricating. It is converting your question into one it can answer, then presenting that answer as if no conversion occurred. The output is fluent, locally coherent, and often factually accurate — about the question it actually answered, not the one you asked. The researcher accepts the answer and moves forward. The gap is never surfaced.
hum is in closed pilot. If you are a researcher using AI tools and want to know when you can trust the output, email us.