Accessible Innovation in XR: Maximizing the Curb Cut Effect

News Room | Published 16 September 2025

Transcript

Dylan Fox: My talk is Accessible Innovation in XR: Maximizing the Curb Cut Effect. My name is Dylan Fox. I’m the Director of Operations for XR Access at Cornell Tech. Our schedule for today, we’re going to talk first a little bit about what accessibility is. We’re going to talk about curb cuts and how they can drive innovation. We’re going to talk about a few examples of inclusive extended reality using some scholarly papers that have just come out.

Defining Accessibility

Accessibility is a broad topic, and I want to cover three terms we’re going to use throughout this presentation that are very closely related to one another: accessibility, impairment, and disability. Accessibility fundamentally just means that something is usable regardless of your ability. Abilities come in all shapes and sizes. There’s the ability to hear, to see, to move freely, to process information. Every single one of us has vastly different abilities that enable us to do what we do every day. Next, impairment. Impairment is something that hampers our abilities. That could be something that is intrinsic to us, or it could be something external. For example, you could have a vision impairment because you lost a sword fight to a pirate and had your eye put out. Or you could have a visual impairment because it’s dark outside and you can’t see very well.

Finally, disability. Specifically, I’m talking about something called the social model of disability. This is the idea that disability is the mismatch between ability and expectation. Somebody who’s in a wheelchair may have a long-term impairment, but that impairment doesn’t turn into a disability until they come to an obstacle that they can’t cross in that wheelchair, like a building that only has stairs as its entrance. This is compared to the medical model, which says, basically, if you don’t match the human standard, two arms, two legs, there’s something intrinsically wrong with you. We tend to move towards the social model that says all of this can be attributed to this mismatch, and that’s something we can fix in the environment; we don’t have to fix people.

With those three in mind, let’s talk about who accessibility is for, because I think the number one challenge that I see when talking to people about accessibility, especially at the C-suite level, is that there are a lot of folks that think about accessibility as charity. They think, there’s some small collection of poor, unfortunate souls out there who have permanent limb loss, or vision loss, or what have you, and accessibility is a bone we throw to them to make ourselves feel better and comply with legal standards.

That is absolutely not the case. There are people with permanent impairments; here, for example, we have Paralympian Ma Lin of Australia, a gold medal Paralympic champion in table tennis. Somebody like him has a permanent impairment. He has no arm. He lost it at age 5 when he stuck it in a bear cage at a zoo. He has a very distinct set of abilities compared to other people. He could beat every single one of us here at ping pong, but he would also have a lot of challenges doing things that assume you have two hands, two separate mechanisms with which to manipulate things. When people think about accessibility, they often picture somebody like Ma Lin.

Impairment comes not just in the form of permanent impairments, but also in terms of temporary and situational impairments. You could have a mobility impairment because you only have one arm permanently. You could have a mobility impairment because one of your arms is in a sling because you broke it last week trying to imitate Ma Lin playing ping pong. Or it could be a situational impairment where if you are a new parent, you probably have one arm at all times holding a child. That means you would very much benefit from being able to do a lot more things one handed.

That idea that things that benefit people with permanent disabilities also benefit people with temporary and situational impairments is really the crux of this. When we think about who accessibility affects, who is accessibility for, accessibility affects everyone. Yes, even you. There’s an idea that rather than thinking of some people as permanently disabled, it’s better to think of the rest of us as temporarily abled. Because at some point in our lives, either by developing a permanent impairment or simply by dealing with temporary or situational impairments, all of us are going to face disability. We’re going to face that time where we’re in a loud environment and we can’t hear what people are saying. We’re going to face that time when we’re driving, we need to keep our eyes on the road, but our phone is trying to send us a notification that it wants us to look at. All of those are situations that are improved by accessibility.

Curb Cuts and Innovation

Let’s talk then about the curb cut effect. Curb cuts are those little ramps that go from the curb down to the street. They are what enable people in wheelchairs to move smoothly through a city without having to be lifted by somebody, or having to find driveways and wheel down the street half the time in order to get where they’re going. The history of curb cuts is a funny thing, because after World War II, there started to be a lot of veterans saying, I’m having a lot of trouble leaving the house and moving around in my wheelchair. That got a few places to install these. What really changed things was the disability civil rights movement.

In the 1960s you had folks like Ed Roberts, one of the first severely disabled people to attend UC Berkeley; he had polio and had to spend half his time in an iron lung. Still, at night, sometimes he and his buddies would, in theory, go around smacking sidewalks with sledgehammers and throwing up wooden ramps, telling the city, if you’re going to fix these, you might as well fix them right, or else we’ll just keep breaking them.

A funny thing happened, which was that when places started to finally put in these wheelchair ramps, these curb cuts, they figured out, we put these in for people with wheelchairs, but it turns out they’re actually really useful for a lot of people. Useful for people with dollies, or bicycles, or strollers, or any number of wheeled things. The curb cut effect came to be known as this idea that when we design for people with disabilities, we make things better for everyone.

If you want a digital example of that, you can look no further than captions. Captions were originally created for deaf people in order to be able to enjoy television and movies. Nowadays, over 80% of Gen Z viewers use captions while watching TV, according to the BBC. That’s not because all of them have permanent hearing disabilities, it’s because sometimes they prefer to have captions. They may be watching in a loud environment. They may want to watch on mute. They may have English as a second language. Any number of reasons. You’ll find that when you have this feature that was originally aimed at this disabled population, you’ll have a lot more people than you expect that will use it on a consistent basis. There are a few other products that might be on your desk that actually arose from accessibility technology. Keyboards, of course, came from typewriters.

One of the first working typewriters was actually built by an inventor named Pellegrino Turri in 1808, who built it for his blind friend, the Countess Carolina Fantoni da Fivizzano, in order to help her write legibly. Texting. Text telephones were originally made by a deaf dentist named Dr. James Marsters in 1964 to enable deaf people to type messages over phone lines. It’s what a lot of people consider to be the precursor to modern-day texting. Audiobooks. The first audiobooks came out in 1932 when the American Foundation for the Blind commissioned a whole series on vinyl, with 15 minutes per disc of people reading out things like the U.S. Constitution or the works of Shakespeare. Not terribly long thereafter, the Library of Congress decided, that’s a pretty good idea. We should start supporting these types of alternate formats.

Finally, speech recognition was something that was improved upon at Bell Labs in 1952. It was originally created with people like disability rights activist Margaret Pfrommer, who was paralyzed from the neck down, but then additionally boosted and enhanced by DARPA, who found that pilots often have a situational disability: their hands are busy flying the plane. They need another way to interact with their technology. All of these are things that were originally created with solving the needs of disabled users in mind and then found this massive use outside of that context.

One of the principles that underlies this effect is the idea of modularity. Modularity means the ability to mix and match parts of a system. In order to use technology, we get input from a device, we make a decision about what we want to do, and then we output our commands to it.

For example, you might get an input of a phone alert with a text from your friend, and the output would be to open the message. Then you get a new input in the form of the contents of that message, a new output in the form of the response that you send back. The key to making things accessible is to make sure that there are multiple ways to get input and multiple ways to give output, that none of these is a bottleneck. Because when that text message comes in, if you can only read it on your screen and that’s the only way to understand what the text is, then if you’re blind or low vision, you might not be able to text. If you can have a more modular system that says you can read it, or you can listen to it being read out loud using text-to-speech, or you can feel it using something called an adaptive braille display, then all of a sudden that is a much more robust system, both in terms of being able to support disabled people, as well as being able to just suit the preferences of everybody.

Avoiding those bottlenecks, having these modular systems where people can customize, I want the input to come like this, I want the output to go like this, is really vital, because there is no such thing as a one-size-fits-all accessible system. Every single person out there is going to have different abilities and different intersections of disabilities that means something that works best for them is not going to work best for everyone else. Having that modularity, having that flexibility is really vital for making these systems more powerful and accessible in the long run.
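
To make the modularity idea concrete, here is a minimal TypeScript sketch of a notification pipeline where the message is decoupled from the channels that deliver it. The interface and class names are illustrative, not taken from any real accessibility SDK.

```typescript
// Minimal sketch of a modular output pipeline: the message content is decoupled
// from how it reaches the user, so no single channel becomes a bottleneck.
// All names here are illustrative, not taken from a real accessibility SDK.

interface OutputChannel {
  present(text: string): void;
}

class VisualDisplay implements OutputChannel {
  present(text: string): void {
    console.log(`[screen] ${text}`); // render the text on screen
  }
}

class TextToSpeech implements OutputChannel {
  present(text: string): void {
    console.log(`[speech] speaking: "${text}"`); // hand the text to a TTS engine
  }
}

class BrailleDisplay implements OutputChannel {
  present(text: string): void {
    console.log(`[braille] refreshing cells for: "${text}"`); // drive a refreshable braille display
  }
}

// The user (or their assistive technology) chooses which channels are active.
function notify(text: string, channels: OutputChannel[]): void {
  for (const channel of channels) {
    channel.present(text);
  }
}

// Example: a blind user might enable speech and braille; a sighted user, the screen.
notify("New message from Alex", [new TextToSpeech(), new BrailleDisplay()]);
```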

Another really important note here that I think a lot of people don’t quite grasp is that when you’re putting in work to make things accessible to people, that same work is also making it accessible to AI, and APIs, and machines, and anything else that you want to plug that data into. Take the example of captions in AltspaceVR. May it rest in peace. Microsoft did a lot of work to enable captions. They had to lay a lot of groundwork and infrastructure for this. They had to get speech-to-text working in all these different areas. They had to set up this thing of having these windows appear in front of everybody. They had to determine the rules for those in social VR situations. It was a fair amount of infrastructure.

A lot of current CEO thinking would say, why should I spend all of these resources, all of this dev time and energy and effort, on making this whole system that’s only going to be used by whatever percentage of people are permanently deaf? What you’ll find is that you’re not just doing some type of charitable donation here. You’re building infrastructure that opens up all kinds of other capabilities. Because it turns out, once they had all of this in place for captions, they were one API call away from having real-time translation. Because now that you had a text stream and a way to distribute it, you could call a translation API, localize it into any language, and here’s a brand-new feature. A brand-new feature that would not be possible if all they were doing was just piping in voice and not doing any other processing to it. This is one of those areas where any investment you make in accessibility will see a return in making your app more flexible and more powerful in the long run.
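
Here is a hedged sketch of that "one API call away" point: once speech is already flowing through a caption pipeline as text, localization is a small extra step. The function names below (`transcribeAudioChunk`, `translateText`) are stand-ins for whatever speech-to-text and translation services an app actually uses; they are not real API names.

```typescript
// Hypothetical sketch of the "one API call away" point: once speech is already
// flowing through a caption pipeline as text, localization is a small extra step.
// `transcribeAudioChunk` and `translateText` stand in for whatever speech-to-text
// and translation services an app actually uses; they are not real API names.

async function transcribeAudioChunk(audio: ArrayBuffer): Promise<string> {
  // ...call a speech-to-text service here...
  return "Welcome, everyone, to the demo room.";
}

async function translateText(text: string, targetLang: string): Promise<string> {
  // ...call a translation service here...
  return `[${targetLang}] ${text}`;
}

// The caption pipeline built for deaf and hard of hearing users...
async function captionPipeline(
  audio: ArrayBuffer,
  showCaption: (line: string) => void,
  targetLang?: string
): Promise<void> {
  const transcript = await transcribeAudioChunk(audio);

  // ...becomes a real-time translation feature with one extra call.
  const line = targetLang ? await translateText(transcript, targetLang) : transcript;
  showCaption(line);
}

// Example usage: same pipeline, captioning in English or localized to Spanish.
captionPipeline(new ArrayBuffer(0), console.log);
captionPipeline(new ArrayBuffer(0), console.log, "es");
```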

Inclusive Extended Reality (XR)

Let’s now talk a little bit more about accessible extended reality. I’m going to talk about three areas where people are thinking very hard about how to apply XR to support folks with disabilities. We’re going to talk about how those efforts could lead to big improvements for everybody down the line.

1. AI Agents

First of all, obviously, it’s 2024 at a tech conference, so we’re going to talk about AI. One really good use of AI to support disabled folks is in this idea of sighted guides. If you are blind and you’re in a new space, very often you may ask for a sighted guide to support you. This is somebody, ideally familiar with the space, whom you can take by the elbow; they guide you around, you can ask questions about the environment, and they generally support your understanding of the area. We thought that would maybe be a pretty good approach to try in VR. We had Jazmin Collins, Crescentia Jung, and a couple of other folks at our lab at Cornell Tech experimenting with sighted guides in VR. Because there’s no good reason we can’t take that tried-and-true accessibility technique from real life and apply it to virtual realities.

The only thing you really need to do is add a few features, like being able to grab onto somebody’s elbow and let them tow you around. A little bit of design thinking goes a long way in enabling these tried-and-true techniques. The work they’ve done so far has been on sighted guides who are humans. Obviously, there’s a huge opportunity here to have sighted guides that are AI. To have an AI agent that can help guide you around an environment, and let you understand it and ask questions about it. That is the type of application that I know we’re seeing a lot of in other places as well, but putting the time and energy into thinking about how this will specifically work for somebody who is blind will open up a lot of advances in UX for agents in general.

It doesn’t just apply to virtual reality. It applies to the real world and augmented reality as well. We have on-screen here, first, Be My AI, from Be My Eyes, which is a service that lets you take a photo, run that photo through an AI, basically, and have a conversation with an AI agent about what’s in that photo. This is really powerful. Because if you think about previous forms of accessibility, something like a screen reader, you can imagine a screen reader handling a photo of a microwave console. The screen reader might read you off every single button on that console. With an AI agent, you could take a photo and ask, “It’s a microwave. Where’s the popcorn button?” “Bottom row, second from the right”.
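
As a rough illustration of that contrast, here is a small TypeScript sketch: a screen-reader-style pass enumerates every labeled element, while a conversational agent answers one targeted question about the same image. `askVisionModel` is a hypothetical stand-in for a multimodal model call, not a real Be My Eyes API.

```typescript
// Contrast sketch: a screen-reader-style pass reads every labeled element in order,
// while a conversational agent answers a targeted question about the same image.
// `askVisionModel` is a hypothetical stand-in for a multimodal model call.

interface LabeledElement {
  label: string;
}

// Screen-reader style: enumerate everything, in reading order.
function readAll(elements: LabeledElement[]): string[] {
  return elements.map((el) => el.label);
}

// Agent style: one question about the image, one focused answer.
async function askVisionModel(imageUrl: string, question: string): Promise<string> {
  // ...send the image and the question to a vision-language model here...
  return "Bottom row, second from the right.";
}

// Usage: instead of hearing every button label, the user just asks.
console.log(readAll([{ label: "Power" }, { label: "Defrost" }, { label: "Popcorn" }]));
askVisionModel("microwave-console.jpg", "Where is the popcorn button?").then(console.log);
```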

That ability to bring in AI’s capacity to discern what’s important in a space and have a natural language conversation with somebody is incredibly powerful. I think we’ve seen that really tangible uses of this, like supporting folks with low vision, can lead to really powerful uses for everybody. I have another example here, RoboGuide. This is from Olaoluwa Popoola at the University of Glasgow, who is working on taking these capabilities not just into the phone, but into robotics as well. In this case, you’d have a robotic dog, one of the, I think, Atlas Robotics models, that could utilize a lot of these AI functionalities, but do it in a way that is more embodied in the world, taking on the role of a more traditional seeing eye dog at only a tiny fraction of the cost, because the cost of training and caring for a real seeing eye dog can be very high.

Another really great use of AI for accessibility is in the form of AI training. We see on the left, a still from “ASL champ!”, which is a training program from Shahinur Alam and Lorna Quandt of Gallaudet University, which tackled the problem that if you were trying to learn sign language, for example, because your child is deaf, you can watch videos, but you’ll have no idea if you’re doing it correctly. An AI agent, obviously no substitute for a real teacher, could be spun up on demand and help you understand whether you’re learning things correctly or whether you’re going to be signing like you have the verbal equivalent of marbles in your mouth.

Similarly, on the right, we have something from Natalia Rosenfield at Chapman University called a VR System for Practicing Conversation Skills for Children with Autism. Say you have a child with autism and you want to give them the opportunity to practice conversation. Oftentimes that’s really just a process of learning how to mask and pretend to be neurotypical, but that’s a separate conversation. Nonetheless, you can have these AI agents that are available on demand, that let people move at their own speed and practice more effectively than just having a book or a YouTube video. The applications of this for AI agents in general are really quite powerful. You can imagine something like an AI tour guide. Let’s say you’ve made this fancy VR space that has all of these different amenities, all these different functionalities. How do you guide somebody through that space? You can look to the literature on AI sighted guides and get a much better understanding of the things people care about.

Here are some functionalities like being able to have that actual guidance of, let me hold on to you and you can fly me around. That could really improve that user experience for everybody, not just people with visual disabilities. You can similarly see the expertise that’s gained from these types of studies being applied to self-driving cars, being applied to AI assistants. For visually impaired folks watching, we have images on-screen of Google’s little self-driving cars and Cortana from Halo. All of these different types of things that we’re thinking about putting into broad usage will absolutely benefit from the expertise of creating those edge cases, focusing on assistive uses for disabled folks.

2. Novel Inputs

Number two, let’s talk a little bit about novel inputs, because this is one of those areas that has really benefited from the expertise of folks with lived experience of disability. First up is this idea of multimodal inputs. With a lot of the first virtual reality headsets, you had exactly one choice about how to interact with them. They had motion-tracked controllers. You had to point them, click them. There was such a bottleneck there. If you didn’t have functioning arms, if you didn’t have functioning eyes, if you didn’t have the ability to squeeze your hand reliably and hold things in a certain direction and move around a space, you were locked out of that technology. We’ve started to see some improvements there. Something like the Apple Vision Pro has a lot more modularity in how you can approach it. You can direct it with gaze, with voice.

If you go into the accessibility settings, you can choose pointing from the left eye or right eye, from head position versus eye movement, or with your finger, your wrist, your body. It’s still not perfect. I also have on-screen here a fellow with severe limb differences named Ryan Hudson-Peralta, who runs the Equal Accessibility channel. He tried to set it up without hands, and it turns out that’s not quite as easy as it sounds. There are a few things like having to tap the crown a certain number of times to enable certain accessibility features. He couldn’t do it out of the box without any help. We’re getting there. They did eventually get it set up such that he could control this really powerful computing device with nothing but his eyes and voice, which is really powerful. Going even beyond that, we have things like brain-computer interfaces. Now on the screen on the left, we have Cognixion’s Cognixion ONE BCI device, which is designed for people with really severe motor disabilities who may not even be able to move their eyes.

Thanks to the BCI system and some very clever user interface design, those users have a much-expanded ability to interact with the world around them and communicate their thoughts and desires using this technology. Then, on the right, we have Meta’s surface electromyography wrist controller, which is currently, I think, under research. There was a great talk at the XR Access Symposium by Kati London about this. This is something that reads the electrical signals going through the arm and can even detect a pinch just from you sending the nerve signals for a pinch. Even if you don’t have a finger and thumb to pinch with, you can still control a VR and AR device as if you did. They refined this technology in part by working with another person who had limb differences and didn’t have a hand to work with. In doing so, they made it significantly more robust for people who do have the usual number of hands and fingers.

Some of the applications here include things like teleoperation. You now have on-screen Tangible Research’s telerobotics, where a man is manipulating a Rubik’s Cube using robot arms he’s controlling remotely, and somebody who really wishes they had teleoperation, a man in a bomb disposal suit. These are things that very much benefit from having more ways to communicate with your technology. Being able to have things like brain-computer interfaces and surface electromyography communicating what we want out of our tech is the difference between something like this, where we have a robot that’s just falling over, and one of the newer models of Atlas that can dance a whole jig. There’s so much more happening there.

The one on the left, you’ve got a couple of motions: arms back, foot out, kick. It turns out that real life is a little bit more complicated than that, and something like this has way more going on. As humans, oftentimes when we’re controlling our own body, we don’t even think about this. If I want to start dancing, I can just do it at the speed of thought. Whereas if you’re trying to define things with these narrow-bandwidth communication paradigms, you can only do so many things at once. You can only send so many signals at once. When you’re in these dynamic real-world environments, when you’re trying to control things that have to obey the laws of gravity, that’s often not enough. The understanding that we gain through working with folks like this, working with devices like this, is going to be applied to make our general interactions that much smoother.

3. Multisensory Experiences

The last category I’m going to talk to you about is the idea of multisensory experiences. Because as Ohan was saying before, a lot of people think about XR as being a fundamentally visual experience. They develop accordingly. That wouldn’t be such a bad thing, except that that’s also where a lot of people stop developing. They make something that looks really nice, and then they just forget about the rest. That’s a problem for a number of reasons. When you use things like sound really cleverly, for example, you can vastly change the experience. We have on-screen here a couple of examples. We have Benvision, which I am one of the advisors for, which is trying to use music-based mobility for visually impaired folks. It takes scenes and sonifies them in a way that is pleasing to the ear, so that instead of walking into a room that’s silent, you can have the chairs play a note, and the table play a note, and various other things in the environment begin to sonify themselves, so that you can have this experience and an understanding of the space around you without having to rely on sight.
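
As a toy illustration of object sonification (not Benvision's actual implementation), here is a sketch that assigns each object type a note and plays it from the object's position; the type-to-note mapping and the function names are assumptions made for the example.

```typescript
// Toy sketch of object sonification: each object in the room is assigned a note
// and "plays" from its position, so the layout can be heard rather than seen.
// This is not Benvision's implementation; the mapping and names are illustrative.

interface SceneObject {
  name: string;
  position: { x: number; y: number; z: number };
}

// An assumed mapping from object type to a musical note.
const notesByType: Record<string, string> = {
  chair: "C4",
  table: "E4",
  door: "G4",
};

function sonifyScene(
  objects: SceneObject[],
  playNote: (note: string, at: SceneObject["position"]) => void
): void {
  for (const obj of objects) {
    const note = notesByType[obj.name] ?? "A3"; // fall back to a default note
    playNote(note, obj.position); // a spatial audio engine would pan and attenuate by position
  }
}

// Example: log the notes instead of playing real audio.
sonifyScene(
  [
    { name: "chair", position: { x: -1, y: 0, z: 2 } },
    { name: "table", position: { x: 0, y: 0, z: 3 } },
  ],
  (note, at) => console.log(`play ${note} at (${at.x}, ${at.y}, ${at.z})`)
);
```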

There’s another project called SocialSense XR that we actually worked on at Cornell for a while. The idea of it was, if you are blind and in a VR space, there’s a lot of social language, of body language, that you don’t get. Things like, how many people are looking at me? Where are the groups in this room? Is everybody spread out equally, or are they in a big crowd? How do you understand that? In real life, there are lots of subtle cues for low vision people to take advantage of to know that stuff. People aren’t silent. Oftentimes in social VR situations, you’ll have these avatars that are just floating there, not making any noise, like holograms. Even when they’re moving, there will be no footsteps. It’ll just be this eerie silence. Putting in those types of sound and other haptic feedback, creating more multisensory experiences, is going to make those spaces more usable for folks who have visual disabilities, but also just make them more real for the folks who don’t.

The last example I’ll give here from sound is a really fascinating paper called “Surveyor: Facilitating Discovery Within Video Games for Blind and Low Vision Players”. This is by Vishnu Nair and his team at Columbia University. It just came out in 2024. This one had a really advanced take on helping people not just understand what’s around them, but build up a map of their environment in their head. You see on-screen here, there is a grid that stops at this wall.

If you go around the wall, there’s a key card that’s the objective. Their technology would basically give you different sonifications for ground that you’ve already covered versus an area that you haven’t seen yet, and so allow you to more easily home in and navigate through a strange space like this. I don’t know if any of you, like me, have gotten lost in VR spaces before. These types of studies, these types of applications that, yes, were made with blind folks in mind but could still absolutely be really helpful for everybody else, are abundant if you just know to look for them.
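
Here is a much-simplified sketch of that covered-versus-uncovered idea: track which grid cells the player has visited and answer probes with a different cue for explored and unexplored ground. The class and method names are invented; the paper's actual system is richer than this.

```typescript
// Rough sketch of the covered-vs-uncovered idea: track which grid cells the player
// has visited and answer probes with a different cue for explored and unexplored
// ground. Names are invented; the paper's actual system is richer than this.

type Cell = { x: number; y: number };

class ExplorationTracker {
  private visited = new Set<string>();

  markVisited(cell: Cell): void {
    this.visited.add(`${cell.x},${cell.y}`);
  }

  cueFor(cell: Cell): "explored" | "unexplored" {
    return this.visited.has(`${cell.x},${cell.y}`) ? "explored" : "unexplored";
  }
}

// As the player sweeps a virtual probe across nearby cells, each cell answers with
// its cue, letting them build a mental map of where they have and haven't been.
const tracker = new ExplorationTracker();
tracker.markVisited({ x: 0, y: 0 });
console.log(tracker.cueFor({ x: 0, y: 0 })); // "explored"
console.log(tracker.cueFor({ x: 1, y: 0 })); // "unexplored"
```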

Another really important part of multisensory experiences is haptics. There are some companies that have put out some great things, like HaptX, who has sets of gloves that can give you an insane amount of touch feedback, let you pick something up and really feel like it’s in your hand. That is fantastic for, again, any number of applications. Looking to the academic research, there are folks thinking about how to go even further. I have a paper here called “Stretch or Vibrate? Rendering Spatial Information of Static and Moving Objects in VR via Haptic Feedback for Blind People”, from Jiasheng Li at the University of Maryland, which also came out in 2024, where they were experimenting with basically a device you’d stick your hand in.

It could tell you about both static and moving objects elsewhere in the VR space by either vibrating or touching down and then stretching the skin out in a given direction. It worked pretty well to give blind folks an idea of, there’s something there, there’s something there, there’s something moving from behind me, circling around me. That is something spatial sound can obviously help with, but having an additional way of communicating it through haptics means you can leave your hearing unblocked for other uses and just tell people about these things via their skin. Because a lot of disabled folks have to take all the things that we get through all our senses and condense them down to one or two or three fewer senses, this type of research is really valuable.
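
A simplified sketch of that rendering rule follows, with the modality choice stated as an assumption for illustration: static objects are conveyed by a skin stretch toward their direction, moving ones by vibration. The real device and study are more nuanced than this.

```typescript
// Simplified sketch of the rendering rule described above, with the modality choice
// as an assumption for illustration: static objects are conveyed by skin stretch
// toward their direction, moving ones by vibration.

interface TrackedObject {
  bearingDegrees: number; // direction relative to the user, 0 = straight ahead
  isMoving: boolean;
}

type HapticCommand =
  | { kind: "stretch"; bearingDegrees: number }
  | { kind: "vibrate"; bearingDegrees: number };

function renderHaptics(objects: TrackedObject[]): HapticCommand[] {
  return objects.map((obj): HapticCommand =>
    obj.isMoving
      ? { kind: "vibrate", bearingDegrees: obj.bearingDegrees }
      : { kind: "stretch", bearingDegrees: obj.bearingDegrees }
  );
}

// Example: a wall to the left (static) and a person circling behind (moving).
console.log(
  renderHaptics([
    { bearingDegrees: -90, isMoving: false },
    { bearingDegrees: 170, isMoving: true },
  ])
);
```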

One thing I also want to communicate about this is that, for XR in particular, we need to look beyond the concept of the screen reader. Screen readers are applications that help blind and low vision users understand what’s on a screen. Usually what they’ll do is they’ll take a website or an app, and they’ll go left to right, top to bottom, and just read out what is there. Text can be read out directly. Things like images need something called alt text to have something for the screen reader to read. Obviously, these are an incredibly important part of accessible technology. But they’re not enough for XR, because it’s one thing to go down a website and say, what’s on this website? The title is this, the first header is this, the second header is this. It’s another thing to be in a 3D space and just start reading out objects like chair, chair, camera, person, chair, laptop. It’s not enough.

There’s a great prototype that Microsoft made not long ago called Scene Weaving that took a very multisensory approach to VR, where they had, obviously, multisensory feedback, embodied understanding, user control over the flow of information, and prioritization of urgent data. Things like, you could do a little bit of echolocation and have a ping go out, and from hearing it, understand, there’s a wall right behind me but the space in front of me is more open. You could send out a pulse that would tell you what’s nearby, or set it, as you walk, to read out things that approach you. If somebody waved at you, it would give you a notification, “Somebody wants to talk to you”, and it would prioritize that over more irrelevant parts of the environment. All of these features. That is the type of thinking I think we’re going to need in order to, again, first make XR fundamentally accessible. Secondarily, so many of these features would, I think, be useful for fleshing out an environment for somebody who does have all their senses anyway.
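
Two of those ideas, the on-demand pulse and the prioritization of urgent social cues, can be sketched roughly as follows; all names are invented, and this is not Microsoft's implementation.

```typescript
// Sketch of two ideas from that prototype, with invented names: an on-demand pulse
// that reports what is nearby, and a priority queue that lets urgent social cues
// (somebody waving at you) pre-empt routine environment descriptions.

interface NearbyThing {
  label: string;
  distanceMeters: number;
}

interface Announcement {
  text: string;
  priority: number; // higher numbers speak first
}

const queue: Announcement[] = [];

function announce(text: string, priority: number): void {
  queue.push({ text, priority });
  queue.sort((a, b) => b.priority - a.priority); // keep urgent items at the front
}

// On-demand pulse: describe what's close, nearest first.
function ping(things: NearbyThing[]): void {
  const nearby = things
    .filter((t) => t.distanceMeters < 5)
    .sort((a, b) => a.distanceMeters - b.distanceMeters);
  for (const t of nearby) {
    announce(`${t.label}, ${t.distanceMeters.toFixed(1)} meters away`, 1);
  }
}

// A wave from another user outranks the ambient descriptions.
ping([{ label: "wall", distanceMeters: 1.2 }, { label: "table", distanceMeters: 3.5 }]);
announce("Somebody wants to talk to you", 10);
console.log(queue.map((a) => a.text));
```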

Some applications of this. Number one, search engine optimization. Going back to that idea that when you make something accessible to people, you’re also making it accessible to machines. We have an image here of an experiment from our good friends at Equal Entry on adding alt text in social VR. For example, there’s a model of a piranha floating around and it has a tag, piranha, so that if you were using a screen reader, it could theoretically tell you, that’s not just a random collection of pixels, that’s a fish. That’s a piranha. If the screen reader can do that, then that means that other types of machines, APIs, AIs can do that as well. If you wanted to have a search engine for VR spaces, that engine would now be able to understand, there’s fish in this scene. It’s not just a collection of pixels. It’s not just arbitrary object labels in Unity. It gains a better understanding of what’s in the scene in the same way that alt text gives website search engines a better understanding of what’s on a webpage.
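
Here is a minimal sketch of the double duty that alt text does in a 3D scene: the same metadata a screen reader speaks can be indexed by a search engine or handed to an AI. The field names are illustrative, not from a real social VR SDK.

```typescript
// Minimal sketch of the double duty alt text does in a 3D scene: the same metadata
// a screen reader speaks can be indexed by a search engine or handed to an AI.
// Field names are illustrative, not from a real social VR SDK.

interface SceneEntity {
  meshId: string;    // e.g. an arbitrary engine object name
  altText?: string;  // human-readable description, e.g. "piranha"
}

// What a screen reader (or any other machine consumer) gets to work with.
function describe(entity: SceneEntity): string {
  return entity.altText ?? "unlabeled object";
}

// A trivial "search engine for VR spaces" over the same metadata.
function searchScene(entities: SceneEntity[], query: string): SceneEntity[] {
  const q = query.toLowerCase();
  return entities.filter((e) => e.altText?.toLowerCase().includes(q));
}

const scene: SceneEntity[] = [
  { meshId: "mesh_0042", altText: "piranha" },
  { meshId: "mesh_0043" },
];
console.log(describe(scene[0]));                    // "piranha"
console.log(searchScene(scene, "piranha").length);  // 1: searchable by meaning, not just pixels
```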

The difference between a space that hasn’t been set up for this type of multisensory experience and one that has is the difference between a big mess of data that is only really interpretable by a sighted human and something that is legible not only to humans with all types of sensory abilities, but to machines and search engines as well. Really, the fundamental thing that we can take away from these more multisensory experiences is immersion. When you are in a place that is using not just your sight, but your sense of sound, your sense of touch, maybe even one day your sense of smell and your sense of taste, that is going to make a much more powerful impression on you. By thinking about how to have enough definition in your sound or your haptics to let somebody who’s fully blind or fully deaf use a VR space, you’re going to add a ton to the experience for able-bodied people, who will also benefit from all that information.

Conclusion

In conclusion, why look at curb cuts? Number one, it’s not just the right thing to do. Number two, it’s good business too. Accessibility makes products better for everyone.

Resources

This is the XR Accessibility Project, something we worked on with the XR Association. It’s a one-stop resource of accessible solutions for designers and developers working on XR, right now more focused on design. There’s sadly a big dearth of open-source code for accessibility, but we’ve been doing our part to flesh that out. You can find that at xraccess.org/github. Then, second, I would strongly encourage anyone who wants to talk more about this stuff to reach out to us at XR Access. We have very much an open door where we are trying to help people create accessible things and trying to spotlight folks that have. Anyone is welcome to contact us. You can come to our website. You can email me, [email protected], or I put my LinkedIn on-screen as well, linkedin.com/in/dylanr (Dylan Fox).

Questions and Answers

Participant 1: I know you showed a use case for autism. Did you find any other use cases for XR for neurodivergent people, particularly with ADHD? I’m thinking of Adam Gazzaley’s lab; a couple of years ago he had developed games that helped with that. Since then, and during the pandemic, I’ve found that most of my friends in XR, who are women and developers and designers, all have ADHD, but it’s not something that we have discussed, and I don’t think it gets focused on very much. I’m curious to know what other interesting use cases you might have seen.

Dylan Fox: I think there are definitely a number of simulators where people who are neurodiverse can train to, again, be in social situations. When it comes to best practices for neurodiverse people, there are things around reducing the amount of information on-screen to just the essential elements, and controlling flashing lights or animations or things that otherwise might be distracting. A lot of it just comes down to good UX design principles anyway. I know also for training, there are some things on the flip side where it’s training for managers not to discriminate against people who are neurodiverse, which I think is severely underrated, because getting people to mask through a job interview is not as valuable as hiring neurodiverse folks who can contribute a lot, even if they don’t necessarily feel comfortable making extended eye contact and things like that. Yes, there are definitely a fair number of people studying the neurodiverse side of XR.

Participant 2: I think it’s very easy to have a blind spot in terms of certain accessibility traits if you’re abled, if you’re just not exposed to certain things. Is there a way to bake that into your thinking when you’re thinking about products? Or, historically, has it been better to address these issues as they occur?

Dylan Fox: I think, sadly, we do live in a society, and specifically in tech, a sector where people by default will design for folks like themselves, because that’s the path of least resistance. The number one thing for avoiding blind spots is to have disabled people on your team. If they are on your team, and ideally if they are your boss, you can guarantee that they will pick up on some of these issues. Also, we don’t just want people with disabilities to be consuming content, we want them to be creating content as well. That is a huge one.

Failing that, certainly making sure that you do user testing with disabled folks, making sure that you have, in your unit tests, for example, things like, can I play my game one-handed? Can I play the game with no volume? Even if you’re not disabled yourself, you can simulate it, not to try to understand somebody’s lived experience, because you can’t do that in 10 minutes of testing, but certainly to understand the technical aspects of, I didn’t realize that without a sound cue, it’s impossible to know something is coming. Maybe I’ll add a visual cue as well that gets you to turn around and look at the important thing. Doing things like that, making sure that accessibility is part of your process at every stage is going to be really important to making sure your game comes out of the gate accessible and doesn’t have to go through a million really expensive retrofits down the line to reach those standards.
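
To show the kind of automated check that implies, here is a sketch with an invented data model: every gameplay-critical audio cue should have a non-audio counterpart, so the game is still playable with the volume off. It would sit alongside ordinary unit tests rather than replace playtesting with disabled users.

```typescript
// Sketch of the kind of check described above, with an invented data model: every
// gameplay-critical audio cue should have a non-audio counterpart, so the game is
// still playable with the volume off.

interface GameCue {
  id: string;
  audio: boolean;
  visual: boolean;
  haptic: boolean;
}

function cuesMissingNonAudioFallback(cues: GameCue[]): string[] {
  return cues
    .filter((cue) => cue.audio && !cue.visual && !cue.haptic)
    .map((cue) => cue.id);
}

// In a test runner you would assert the list is empty. This example data
// intentionally fails: "enemy-behind-you" has no non-audio fallback.
const cues: GameCue[] = [
  { id: "enemy-behind-you", audio: true, visual: false, haptic: false },
  { id: "low-health", audio: true, visual: true, haptic: true },
];
const missing = cuesMissingNonAudioFallback(cues);
console.assert(missing.length === 0, `Cues with no non-audio fallback: ${missing.join(", ")}`);
```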

I’ll make a quick shout-out to one other thing that we do have going on in the industry at the moment. Ohan talked about the lack of accessibility standards when it comes to XR. That is something we are trying to fix. We’ve started up a working group with the Metaverse Standards Forum called the Accessibility in the Metaverse Working Group, where we’re trying to bring folks together to understand what the standards ought to be, and then write up a set of guidelines that would hopefully be adopted by groups like the W3C. If we ever want enterprise XR to take off, then, say you’re in education and you have federal mandates that what you buy must be accessible, there needs to be something you can go down and check off and say, “I’ve hit all of these benchmarks, it must be accessible.” If that’s something that you’re interested in being a part of or getting updates on, definitely reach out. I can connect you.

 
