Assessing the brain at the Barbican, London, UK. Image by Tim Sandle.
New artificial intelligence-generated images that appear to be one thing, but which resemble something else entirely when rotated, are helping scientists test the human mind.
Johns Hopkins University perception researchers have addressed a longstanding need for uniform stimuli to rigorously study how people mentally process visual information. The work was supported by the National Science Foundation Graduate Research Fellowship Program.
“These images are really important because we can use them to study all sorts of effects that scientists previously thought were nearly impossible to study in isolation—everything from size to animacy to emotion,” says lead researcher Tal Boger in a statement.
The researchers adapted a new AI tool to create “visual anagrams”: images, generated via orthogonal transformations such as rotation, that look like one thing in one orientation and something else entirely in another. The visual anagrams the team created include a single image that is both a bear and a butterfly, another that is both an elephant and a rabbit, and a third that is both a duck and a horse.
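The article does not describe the tool's internals, but one published way to generate such images uses a diffusion model that denoises several transformed views of the same canvas at once, averaging the noise estimates so the result reads coherently in every orientation. The NumPy sketch below is a toy illustration of that averaging loop only; `toy_denoise` is a hypothetical stand-in for a trained diffusion model, not the actual tool used in the study.

```python
# Toy sketch (NumPy only) of the multi-view denoising idea behind
# diffusion-based "visual anagram" generators. A real system would use
# a trained diffusion model; `toy_denoise` is a hypothetical stand-in.
import numpy as np

rng = np.random.default_rng(0)

def toy_denoise(img: np.ndarray) -> np.ndarray:
    """Stand-in for a diffusion model's noise estimate (hypothetical)."""
    return 0.1 * img  # a real model would predict the noise in `img`

def rotate180(img: np.ndarray) -> np.ndarray:
    """An orthogonal transformation: 180-degree rotation."""
    return np.rot90(img, k=2)

img = rng.standard_normal((64, 64))  # start from pure noise
for _ in range(50):
    # Estimate noise for the image as-is and for its rotated view,
    # map the second estimate back, then average the two. This nudges
    # the image toward looking coherent in *both* orientations.
    eps_a = toy_denoise(img)
    eps_b = rotate180(toy_denoise(rotate180(img)))
    img = img - 0.5 * (eps_a + eps_b)
```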
The initial experiments explored how people perceive the real-world size of objects. Real-world size has posed a longstanding puzzle for perception scientists, because one can never be certain whether subjects are reacting to an object’s size or to some subtler visual property, such as its shape, colour or fuzziness.
People perceive the real-world size of objects by integrating visual information with distance cues, a phenomenon called size constancy, which allows the brain to maintain a stable perception of an object’s true size despite its varying retinal image size.
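The geometry behind size constancy can be written down directly: an object's physical size follows from the visual angle it subtends and its distance from the eye. The short Python sketch below illustrates this with made-up numbers; it is an illustration of the geometry, not code from the study.

```python
# Back-of-the-envelope illustration of size constancy: the same retinal
# (angular) size implies very different real-world sizes at different
# distances. All numbers here are made up for illustration.
import math

def real_world_size(visual_angle_deg: float, distance_m: float) -> float:
    """Physical size of an object subtending `visual_angle_deg` at `distance_m`."""
    return 2 * distance_m * math.tan(math.radians(visual_angle_deg) / 2)

# An object subtending 5 degrees of visual angle...
print(real_world_size(5, 0.5))   # ~0.04 m at arm's length: butterfly-sized
print(real_world_size(5, 50.0))  # ~4.4 m at 50 metres: bear-sized or bigger
```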
With the visual anagrams, the researchers found evidence for many classic real-world size effects, even when the large and small objects used in their studies were just rotated versions of the same image.
For example, previous work has found that people find images more aesthetically pleasing when they are depicted in ways that match their real-world size—preferring, say, pictures of bears to be bigger than pictures of butterflies. Boger and Firestone found that this was also true for visual anagrams: When subjects adjusted the bear image to be its ideal size, they made it bigger than when they adjusted the butterfly image to be its ideal size—even though the butterfly and the bear are the very same image in different orientations.
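For readers who want a feel for the analysis such a design implies, the sketch below runs a paired comparison on hypothetical “ideal size” settings for the two orientations of one anagram. Every number in it is a placeholder, not data from the paper.

```python
# Hedged sketch of the comparison the design implies: each subject sets an
# "ideal size" for the same anagram shown as a bear and as a butterfly.
# The values below are fabricated placeholders, not study data.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
n_subjects = 20

# Hypothetical on-screen size settings (pixels) for each orientation.
bear_settings = rng.normal(420, 40, n_subjects)
butterfly_settings = rng.normal(250, 40, n_subjects)

# Paired test, since each subject sized both orientations of the same image.
t, p = stats.ttest_rel(bear_settings, butterfly_settings)
print(f"t = {t:.2f}, p = {p:.3g}")
```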
The scientists hope to use visual anagrams to study how people respond to animate and inanimate objects, and they expect the technique to have many possible uses for future experiments in psychology and neuroscience.
Because animate and inanimate objects are processed in different areas of the brain, it is also possible to make anagrams that look like, say, a truck in one orientation but a dog in another. The approach is quite general, and the researchers foresee it being used for many different purposes.
The research will be published in an upcoming issue of the journal Current Biology.
