Opening Ceremony
Jan Withers, TDI President
Transcript: Opening Ceremony
[Start transcript
Visual description: TDI President Jan Withers, wearing a blue shirt and glasses, with short gray hair, standing in front of a solid blue-green wall.
Hello everyone! I am Jan Withers, president of the TDI Board of Directors, and I represent the southeast region of the United States on the board. Now for a brief visual description…I am a white middle-aged female with short gray hair, wearing blue glasses, a blue shirt and simple jewelry, standing against a solid dark blue-green background.
Welcome to TDI’s 24th Biennial Conference. Today is July 26, 2021. Today is a very special day as it is the 31st anniversary of the passage of the Americans with Disabilities Act. You all know it as the “ADA.” This landmark Act paved the way for increased accessibility in many areas of the lives of people who are Deaf, Hard of Hearing, Late-Deafened, DeafBlind, Deaf with other disabilities, and who have other communication disabilities. All the groups I just mentioned will hereafter be referred to as deaf and hard of hearing. The ADA also led to the development of additional legislation such as the 21st Century Communications and Video Accessibility Act of 2010.
Another reason today is a special one is this is the first time we have a fully-virtual conference. This conference is a mix of pre-recorded and live presentations and panel discussions. People who have registered for this conference will also be able to view all or portions of the conference at a later time at their convenience.
However, what is not new about this conference is that once again, we have a truly terrific array of presenters, panel participants and topics. The topics are timely and relevant and our presenters and panel participants are extremely knowledgeable in their respective areas of expertise.
In addition to the fact that this conference is entirely virtual, two things make it unique. First, in the past 16-17 months, our world was upended by the COVID-19 pandemic, which forced us to live and work virtually. Second, there is a heightened focus on diversity, equity and inclusion in all aspects of our society, including information and communications technology.
The pandemic exacerbated and shone a spotlight on the barriers so many of us experience in information and communications technology. But it also revealed to us extraordinary opportunities to advance accessibility. At the same time, the emphasis on diversity, equity and inclusion made it clear that not all communities experience and/or benefit from information and communications technology equally.
What I just mentioned is why TDI’s vision is this: All individuals and communities experience the world of Information and Communications Technology with the same ease, access, and inclusion, resulting in full and equitable participation in society. Notice the word “all.” Think about this: if technology could truly benefit all deaf and hard of hearing people, then we can be sure everyone else benefits. A good example is texting: deaf and hard of hearing people were among the first to embrace it, and it went on to become widely popular with everyone.
However, I want to make clear that I am not talking about just the disparities between deaf/hard of hearing people and hearing people. I am also talking about the disparities that exist within the deaf/hard of hearing community; we must always keep in mind there are many different kinds of communities and individuals within the deaf and hard of hearing world.
Yes, we have experienced enormous gains in the past 30 years but we cannot afford to be complacent. The pandemic has made that clear. Technology is evolving so rapidly and often in such a fragmented way. Sometimes I feel like we are playing the game Whack-a-Mole. We have to be alert and not only respond quickly but also anticipate. That is why it is so important for all of us – consumers, industry, government, and academia – to work together and to nurture and bring in more partners in our collective advocacy efforts.
Our theme for this year’s conference is “Reset and Reconnect.” I think it’s an apt theme for this year – the fact the pandemic has truly upended our world and the need for us to be inclusive of all kinds of communities and individuals make it clear we need to reset and find ways to reconnect with people. Some say TDI is about technology. I say – wait a minute…not so fast…TDI really is about people – it is about our very fundamental need to communicate and connect with each other. Technology is our tool to communicate and connect. So, our work is to make sure technology serves us so that we all are full and equal participants in our society. I believe you will agree the conference program reflects the theme of “reset and reconnect.”
Before I close, I want to thank all our sponsors for their generous contributions. Without their support, we would not be able to have this conference and to do our important advocacy and educational work. Please visit our website to learn who all these wonderful sponsors are. In particular, I want to thank Ultratec for sponsoring the opening of this conference.
Finally, thank you all for your participation in TDI’s 24th Biennial Conference – I hope you come away feeling enlightened and inspired, that you’ve hit the reset button and feel reconnected.
Now it is my great pleasure and privilege to introduce you to our first keynote speaker, Jessica Rosenworcel, who is the chair of the Federal Communications Commission.
End transcript]
Keynote Address
Jessica Rosenworcel, FCC Acting Chairwoman
Transcript: Keynote Address
(Speaker description: White middle-aged woman with shoulder-length dark hair, wearing a black jacket and a light blue v-neck shirt underneath.)
(Signs and speaks) Hello, my name is (stops signing) Jessica Rosenworcel. I am definitely not a professional signer, but I am a big fan of TDI, and it is an honor to be back at your biennial conference.
Obviously, this year’s virtual conference is a lot different than 2019 when I joined you on the campus of Gallaudet. But no matter the format, it is always great to be with TDI. That’s because no voice is more trusted when it comes to making communications technology more accessible for the millions of Americans living with disabilities.
During that 2019 event, I opened my remarks by acknowledging the recent retirement of Claude Stout – TDI’s legendary long-time leader. I talked about how TDI famously submitted over 1,200 filings with the Commission under his leadership. But I’m pleased to report that I am not Claude Stout’s only admirer at the FCC. As many of you know, each year, the Commission gives out something we call the Chairman’s Awards for Advancement in Accessibility. Typically, we present these awards to people who have developed new accessibility technologies.
But considering that 2020 marked the 30th anniversary of the Americans with Disabilities Act (ADA) and the 10th anniversary of the 21st Century Communications and Video Accessibility Act (CVAA), we thought it would be appropriate to use these awards to recognize and acknowledge some giants in the field of promoting accessibility. One of the three honorees that evening was Claude Stout.
The FCC was so proud to honor him that night and I’m pleased to see his tradition of excellence and advocacy lives on.
Now I was appointed acting chairwoman of the FCC on January 21st. And just hours before my swearing-in, TDI submitted comments on the Commission’s COVID-19 Telehealth program.
Not only that, Eric Kaika recruited more than a dozen additional groups as co-signers. So while the leadership at our respective organizations may have changed, it is already clear that TDI continues to foster collaboration and offer expert advice to make FCC policy better and smarter.
So I have a deep appreciation for TDI, not just because of your expertise, but also because I have a long history of working with you on the issues you care about.
You see, before I came to the Commission, I served as legal counsel to the Senate Commerce Committee. And when I was there, I worked on a range of technology and communications issues. And I’m so proud to say that one of the highlights of my tenure was working on the 21st Century Communications and Video Accessibility Act.
I was on the team that helped draft the bill and helped shepherd its passage in Congress, and then had the privilege of watching the President sign it into law. And back in the day when we used to work in person, a signed copy of the 21st Century Communications and Video Accessibility Act was on the wall right outside the entry to my office.
I wanted people to see it, because I wanted to remind people that we can in fact do big things. This law did just that. It took the concept of functional equivalency from the Americans with Disabilities Act and updated it for communications in the digital age.
Now, functional equivalency may sound like the kind of regulatory lingo that only a lawyer could love. But for millions of Americans with hearing and speech impairments, this means that they have the right and ability to pick up the phone, reach out, connect, and participate more fully in the world.
This law is terrific. But I also know we can’t rest on our laurels. Because accessibility and functional equivalency cannot be afterthoughts. We need to continue to give meaning to these principles in the law in everything we do. And that includes, of course, the work of the agency, especially as we reach toward what I surely hope will be the end of this pandemic.
That’s because the events of the past year and a half have changed our relationship to technology.
You know, broadband is no longer nice-to-have. It is need-to-have for everyone, everywhere. And to help all Americans get connected and stay connected during the pandemic, including those with disabilities, the FCC has launched the Emergency Broadband Benefit Program. Eligible households can receive discounts of up to $50 a month for broadband service, and up to $75 a month on tribal lands. Participants can also receive a one-time discount of up to $100 on a computer or tablet.
If your family qualifies for Medicaid, SNAP, free and reduced-price school lunch, or other aid programs, the odds are you are eligible to participate. Households that lost significant income in 2020 may also qualify. The response to this program has been phenomenal. More than a million Americans enrolled in the first week. Altogether, over 3 million households have signed up since this program went live in May. To sign up or learn more, check with your local broadband providers to see if they are participating, or go to FCC.gov/broadbandbenefit to find a participating broadband provider near you.
But that’s not all we’ve done. We’ve also set up the Emergency Connectivity Fund to help schools and libraries get people connected where they live. Through this program, we are in the process of investing $7.17 billion to help get laptops and tablets into the hands of people who lack them and then connect these individuals to fixed or mobile broadband service at home.
In drafting our rules for this program, the Commission said that if people with disabilities require certain devices to connect to the internet, schools and libraries are expected to accommodate those needs. We also included a provision that says applicants with disabilities may request a waiver of the support limits for laptops and tablets, so they get the services and devices under the program that work for them.
The Commission is also in the process of investing $250 million to help more healthcare providers deliver more connected care. The value of telehealth has really become clear in this pandemic. We want to make sure that this technology makes healthcare access more equitable, rather than worsening health disparities. And that is why we have made applicants and participants for all of the Commission’s telehealth programs aware of their responsibilities under accessibility laws.
The FCC’s Disability Advisory Committee has also been exploring issues relating to accessibility gaps that have become apparent during the pandemic, and accessible telehealth has been mentioned as an important issue that may need further discussion. So we’re not only working to make sure people have internet access as we try to get beyond the pandemic, we’re also working to ensure the availability of telecommunications relay services, or TRS.
TRS, of course, is the communication service that allows people with hearing or speech disabilities to place and receive basic phone calls. And as you know, there are a lot of different kinds of TRS now available in the marketplace. During the pandemic, the Commission took note that it wasn’t always easy for TRS providers to fully staff their in-person call centers. So last year, the agency granted TRS providers emergency waivers of certain staffing rules. And in February of this year, I directed the agency to extend these waivers. We’re going to keep monitoring the situation to ensure that these waivers will continue to be available as long as they are necessary to keep the service available and functionally equivalent. The users of these services are always front of mind.

The Commission also continues to work on a number of issues and challenges that preceded the pandemic. This February the Commission updated our wireless hearing aid compatibility requirements to ensure that our rules reflect the latest technical developments and standards, and recommitted to making 100% of wireless handsets hearing aid compatible.
This April, the Commission sought comment on whether updates are needed to our rules for captioning on television in light of so much more watching in so many more ways during the pandemic. The comment period on this just closed on July 6, and we are reviewing submissions and weighing next steps.
We’ve also been looking at our rules specifically for Internet Protocol captioned telephone service, or IP CTS, which lets you simultaneously listen to the other party in a telephone conversation and read captions of what that party is saying.
We are currently reviewing comments on the FCC’s proposal to adopt measurable standards and metrics for captioning delay and accuracy for IP-CTS. This is a matter that I consider really important and I know you do too.
The Commission is also thinking about accessibility issues in proceedings where it may not be obvious – at least not at first blush.
This May, the Commission updated our rules for inmate calling services. As part of these reforms, the FCC must coordinate with the Department of Justice to ensure that incarcerated people with disabilities at federal prisons have functionally equivalent access to telecommunications.
That makes a really big difference for their families who want to keep in touch. But it makes a big difference for all of us because we know that contact with kin reduces recidivism. And we are going to keep this effort going: we are seeking comment more broadly on the provision of communication services to incarcerated people with hearing and speech disabilities, in order to identify further improvements we can make. We want to engage with TDI on all of these issues. And if you believe there are other areas under our authority that are of concern, I encourage you to let us know.
Of course, knowing TDI, I know that won’t be a problem. You know how to make your opinion heard. And I’m glad you do. Because when we improve access to communications for millions of individuals with disabilities, we strengthen our economy, our civic life, and our nation.
So let’s get to work–and do it together. Thank you, and have a great conference!
FCC Town Hall
Diane Burstein, Suzy Rosen Singleton, Eliot Greenwald, Will Schell, and Mark Seeger
Transcript
>> MARK SEEGER: Good afternoon. I believe we’re ready to go to have a conversation with the FCC. I’d like to introduce myself. My name is Mark Seeger and this is my sign name. I represent the central region of the United States on the board for TDI, and I am the outgoing secretary on the board. So let me describe myself. I’m wearing a blue sports coat. I have gray hair. It’s fairly short and spiky. I’m wearing glasses with blue rims and a dark black button-down shirt. I’m a middle-aged person. I’d like to let you know that on the screen we have several video feeds on Zoom with our panelists from the FCC. We are going to try to turn off our videos while we’re not speaking, but during the panel discussion, we will have our videos on and open so everyone can see the panelists. That is during the question and answer session and the discussion. And then we will turn our videos on and off to make them more accessible. I’d like to welcome you all to the town hall with the Disability Rights Office at the FCC.
So I’m excited to begin our TDI conference. Together we recently got a welcome from our President Jan Withers and a wonderful opening keynote address from FCC Acting Chairwoman Jessica Rosenworcel, who addressed current and future work at the FCC in the areas of telecommunications, programs, and the internet. I would like to introduce some of our distinguished guests for this afternoon with their titles, and they can tell you about themselves and what their role is after I introduce them.
So first, I’d like to welcome Diane Burstein, who is the Deputy Chief of the Consumer and Governmental Affairs Bureau.
Second, on the panel, we have Suzy Singleton. She’s the Chief of the Disability Rights Office.
And Eliot Greenwald, the Deputy Chief of the Disability Rights Office.
And, fourth, Will Schell, who is also a Deputy Chief of the Disability Rights Office. Each of them will talk about their roles and their current goings-on at the FCC. Diane and I will turn off our videos – excuse me, interpreter error – Diane will stay on video, and I will turn off my video. I will turn it over to Diane. Please take it away.
>> DIANE BURSTEIN: Hi. Thank you very much, Mark, and thank you to TDI for having me here. As Mark said, I am Diane Burstein, and I am the Deputy Bureau Chief for the Consumer and Governmental Affairs Bureau, which is one of the seven bureaus at the Federal Communications Commission. As deputy bureau chief, I help to oversee the very active Disability Rights Office, and DRO, as it’s called for shorthand, is comprised of many dedicated and talented individuals. I am glad we have several of them on the panel this afternoon.
A little bit about me. I joined the FCC a little more than two years ago. Much of it has been remote which has been interesting, to say the least. But before I came to the FCC, I worked as an attorney in-house at a trade association for communications. And in that role, I worked closely with TDI on a number of video accessibility issues. So I’m very happy to be able to be here today to participate in this afternoon’s panel. Thank you very much. Mark, I’ll turn it back to you.
>> MARK SEEGER: Thank you, Diane. And with that, I would like to ask a question. We have some new attendees this year. Please share an overview of the FCC’s accessibility work and highlight currently active proceedings as examples of the work the FCC does. Do you mind turning on your video again, Diane?
>> DIANE BURSTEIN: I’m happy to talk about that. The FCC and the Disability Rights Office cover disability issues in three main areas – modern communications, video programming, and emergency communications – and undertake a range of different stakeholder initiatives as well. We address disability rights matters including access to advanced communication services and equipment, access to internet browsers built into mobile phones, telecommunications relay services, the National Deaf-Blind Equipment Distribution Program, accessible video programming and video programming apparatus such as closed captioning on television programming and certain video programming online, audio description, and accessible user interfaces, text menus, and guides. And I know we’ll talk more about some of the specific topics later on in the panel.
DRO also acts as an in-house expert, assisting other bureaus within the agency. As I mentioned, CGB is just one of seven bureaus; there are others, such as the Media Bureau and the Public Safety and Homeland Security Bureau, that the Disability Rights Office coordinates with on important issues that come to the commissioners’ attention. We also assist consumers, industry, and others with issues relevant to disabilities. And we also oversee a federal advisory committee called the Disability Advisory Committee, the DAC, which is comprised of numerous stakeholders interested in disability matters, including TDI.
To your question about the FCC’s current work, obviously, there are a number of issues going on, and the chairwoman spoke to many of them. I was going to highlight just one, where we’re currently looking into whether any updates are needed to the Commission’s rules implementing the 21st Century Communications and Video Accessibility Act, the CVAA. Many of the disability-related matters that I just mentioned that the FCC regulates arose from the CVAA, which was enacted more than 10 years ago to help, quote, “ensure individuals with disabilities are able to fully utilize communications services and equipment and better access video programming,” end quote. Given changes in technology and industry practices that have taken place over the last decade-plus, as well as taking into account consumer experiences with the rules as they exist today, the FCC issued a public notice several months ago inviting stakeholders to provide input on areas that are working well, areas where improvements might be made, and requirements that may no longer serve their intended purpose or have been overtaken by new technology. As was mentioned, the period for filing comments on this public notice ended earlier this month, and we’re reviewing the filings that were made, including those submitted by the accessibility advocacy and research organizations of which TDI is a part. So we appreciate the thoughtful comments that have been filed and we look forward to additional conversations about the concerns that have been raised. That’s just one of the issues that we’re dealing with right now, and obviously it covers quite a bit within that one public notice. Thanks, Mark.
>> MARK SEEGER: Thank you, Diane. So now I would like to turn it over to Suzy Singleton. She will elaborate on her role as the chief of DRO and will update us on current activities at the FCC. Take it away, Suzy.
>> SUZY ROSEN SINGLETON: Yes. Thank you, Mark. It really is great to be with all of you. As you know, resetting and reconnecting, given that we are now going through the pandemic, really is so critical, and it is important that we continue to connect with one another even though we may have to connect in different ways. I don’t think anyone could phrase that better than Mark did. As Mark said, I am Suzy Rosen Singleton, chief of the Disability Rights Office in the Consumer and Governmental Affairs Bureau. We’re focused on accessibility, as Diane mentioned, in video programming, modern communications, and advanced and emergency communications. We do work with other bureaus to ensure that we are all connected on everything and that disability is not an afterthought. It is part of the mission of the FCC to make sure accessibility is a priority for our work.
So we are very excited moving forward. With that, Mark, you mentioned that you wanted me to answer what’s new or were you going to go ahead and —
>> Interpreter: Yes. I was on mute. The interpreter was on mute just so you know. Forgive us.
>> MARK SEEGER: So, Suzy, in light of the current unprecedented pandemic, many people are forced to stay home and rely more and more on telecommunications and connectivity to thrive. What is the FCC doing? Can you respond to that?
>> SUZY ROSEN SINGLETON: Yes, thank you, Mark. Eliot and I will respond together because he is involved with TRS, the telecommunications relay services, and today is the 31st anniversary of the ADA. The work we are doing around TRS, thanks to Title IV of the ADA, is quite amazing. Back to Mark’s question. Now that we are in the pandemic, what kind of emergency programs, services, and initiatives have been established? What are we doing to try to address the challenges that everyone is facing when they cannot go to work in person, when they can’t go to school, when they can’t go to a doctor, unless it is an emergency, of course? Now everything is tele-everything. So what have we been doing? You heard a little bit already from Chairwoman Rosenworcel, and I will explain more about the emergency initiatives. The first is the broadband initiative. We have encouraged everyone to stay connected. So we have implemented a 3.2 billion dollar initiative to give eligible participants a discount on their bills for broadband services. Please know those funds are very limited. If you feel you are eligible, please look at www.FCC.gov/broadbandbenefit – broadbandbenefit is all one word. The second thing is the Emergency Connectivity Fund. This is 7.17 billion dollars being distributed to schools and libraries to support students, school staff, and library patrons in staying connected through the school or library system – for broadband connectivity, for equipment, laptops, Wi-Fi hotspots. So please do reach out to your school or library to see about the possibility of funds for disability accommodations as well. Third, she mentioned telehealth. We have spent about $650 million over time to help health care providers deliver services to you in a remote fashion, so you can be safe at home without having to expose yourself or others by going in for medical services. We do want to remind those health care providers that they have ADA obligations to ensure that you have effective communication. We have also been working with HHS and DOJ to coordinate and make sure we have the best possible accommodation services for patients.
I do want to add two more areas that were not really mentioned so far. One is emergency notifications. We have recognized the importance of making sure that televised information is accessible. So we have been working with captioning vendors to make sure their captioners have the priority-worker designation so they can continue to provide captioning services for televised broadcast information about emergencies and so forth. We have been working closely with consumers about accessibility concerns to make sure that you are getting the information that you need from the television stations that are broadcasting emergency information.
There is a nationwide test coming. I want to make sure you are alerted through your wireless phones and through the television as well. Those are the two ways that you can keep on the lookout for emergency information. I will talk about those a bit more later. The test will happen on August 11th at 2:20 p.m. Eastern time. We hope to hear from all of you with your observations about whether or not you were able to access that information. Was the crawl readable? Were the font and the color contrast easy to read or not? There is also a message that pops up on the screen. Were you able to read that? Was it understandable? That’s the kind of information we want to hear from you. Same with wireless emergency alerts; we hope to hear from you on those as well. We have a complaint form established for that purpose at FCC.gov/accessibilitycomplaintsform, or you can email DRO@FCC.gov. Some of you have asked about tablets – people have been asking if you can get the alert on tablets. The WEA process is voluntary: your wireless carrier must opt in, your device must be WEA-capable, and you must have a phone plan. Those are three important things to take a look at and evaluate to see if you would be able to receive that message on August 11th at 2:20 p.m. Eastern time.
One last thing about notifications, as far as the internet. Many of us are on the internet for streaming information and so forth. So Congress did ask the FCC to take a look and see if it is feasible to deliver emergency notifications to consumers watching on the internet. Is it technologically possible? That act requires the FCC to submit a report to Congress by September 28th, 2021, and we did go ahead and release a request for comment last March. Thank you, TDI and the many consumer organizations that submitted comments to highlight the importance of the internet as another tool for emergency notifications.
The last area I wanted to cover before I turn it over to Eliot is emergency communications, and what we are doing in that space to ensure that you are connected during the pandemic. So here I would invite Eliot to join us. Welcome, Eliot. While you’re here, please introduce yourself. I don’t know if Mark wanted to turn his video back on to ask you a question or not.
>> MARK SEEGER: Hi, Eliot. I’d like to add to this question. Suzy added some information in her answer, but accessible communication can be very important in emergency situations. Recently, we had a huge storm in Texas in February. There was a great need for emergency accessibility and for being in touch with the emergency systems. So I’m curious: how is the FCC ensuring that we do have effective communication with emergency services? Can you answer that question?
>> ELIOT GREENWALD: Yes. This is Eliot. Thank you, Mark. I think we got a little bit ahead of ourselves because I was going to tag on to the last question you asked Suzy, and then I will turn it back over to Suzy and tag on to that question as well when Suzy is done. Before I do that, I do want to give a couple of shoutouts. One is, of course, to the Americans with Disabilities Act, today being the anniversary of the ADA. The important thing to note here is that I’m going to talk a little bit about telecommunications relay services, and we owe the whole TRS program to the ADA, because it is in Title IV of the ADA that Congress gave the FCC the authority to establish the telecommunications relay program and a mechanism for funding the program. So that is all quite important. It’s an important part of what we do in the Disability Rights Office.
And, of course, one other shoutout I need to give is to Claude Stout for his fantastic leadership. I will join the chairwoman in giving that shoutout for his fantastic leadership for so many years at the helm of TDI, and Eric Kaika is continuing in Claude’s tradition there. So now getting back to the question. One thing the FCC did pretty early on in the pandemic – in fact, it was within days of when the national emergency was declared regarding the pandemic, in the middle of March – was to start issuing a series of waivers of the rules so that the TRS providers could continue to provide service without interruption. And this was quite important because there was, of course, the pandemic and the need for social distancing, and most services were provided through call centers where social distancing would be difficult because of the number of seats and how close the communications assistants, or CAs, were pre-pandemic in the call centers. So the providers were willing, able, and ready, and did pivot to having their CAs work at home, and the FCC facilitated that through a series of different waivers. For VRS, there were a number of restrictions on CAs working at home, and the FCC waived a number of those restrictions so that the CAs could work at home from the get-go during the pandemic, which not only protected the CAs but also allowed the level of service that consumers were used to and expecting to continue. Of course, the level of service had to increase, because with social distancing and people staying at home, the demand for relay services increased. There was also initially a decrease in the ability to provide the service, due to some CAs being unable to work because of child care and other responsibilities they had early on in the pandemic. So as a result of that, we waived the speed-of-answer requirements for the non-VRS services. There was enough flexibility in the speed-of-answer rules for VRS that we didn’t have to waive those, but we did have to waive them for all the other services, as the chairwoman mentioned. And as I mentioned, we waived a number of restrictions on CAs working at home in VRS.
Another thing: a number of people who were stranded overseas needed to make VRS calls to the U.S. So we waived the 4-week limit on making VRS calls from overseas for all registered VRS users who were overseas. For those who were already overseas, we also waived the requirement to register in advance to make calls from overseas. That helped a lot with the providers being able to step up and provide the service, and I do want to give a shoutout to all the providers who went to extraordinary efforts to make sure service was provided throughout the pandemic and that consumers were able to receive that service. So thank you, providers, for that. Now I’ll turn the floor back to Mark, who will ask the question he initially asked me, I suppose, and then Suzy and I will both respond to that.
>> SUZY ROSEN SINGLETON: Hi. Sorry. Okay. Thank you, yes. We can go ahead and rephrase the question that you asked. The first question was about what pandemic measures were taken, and now Mark is asking a bit more about what kind of options we have to contact 911 and about emergency communications in general. It really is a bit more general. What you tend to use day to day is what you should use – voice, text, relay, and so forth. But we do want to emphasize that the FCC is focused specifically on phone companies or carriers like AT&T, Verizon, and T-Mobile. We’re not focused on 911 call centers or even 988, which is the national suicide prevention lifeline. 911 and 988 are the two that we do not regulate. That is not within our purview; that’s under another federal agency.
So with that in mind, what do we require specifically for calls to 911? For texting, a lot of people do rely upon texting. We require that if the 911 call center is ready and enabled, carriers deliver that message to the 911 call center; however, if the 911 call center is not ready, the carrier must send the caller a bounce-back message notifying them that the 911 call center is not able to accept text messages and they would need to reach out via another method. It is very difficult to know which call centers accept texts, so we do have a registry on our website. You can go to the Disability Rights Office’s website at FCC.gov/accessibility, where you will see a link for text to 911. You’re able to click on that and find out whether your state and your area have text to 911 or not. Now I am excited to share that the registry was just expanded to include RTT, which is real-time text. I know that some of us don’t know what RTT is, but it really is a new TTY. TTY has been around for a very long time – you saw that video at the beginning which talked about how TDI was established in 1968, and TTYs came shortly afterward. IP networks really do offer a modern way of communicating. TTYs are an analog technology, whereas RTT is for use on IP networks. We have already required nationwide carriers to make RTT available to you all. If you look at your smartphones – if you have AT&T, Verizon, or T-Mobile – it is very likely you already have it there; it is just a matter of turning it on. California did pass a law that requires 911 call centers to handle real-time text emergency calls by January of 2021, which has already passed; I’m not sure if it has happened yet or not. However, state and local governments are trying to transition over to make their communication services accessible to you all. In fact, RTT has really saved the day for me. In June, I experienced an emergency and had to call an ambulance for someone. I was not at home; I was at a different location, and I used VRS, which failed to reach my local PSAP because I was not at home – the registered address pointed them toward my home location while I was somewhere else. So I went ahead and used RTT on my voice line, because you’re able to use RTT over the voice line like you would with TTY. I was able to make that call and reach the local PSAP, which is the 911 call center, and they had a TTY there. So I was able to communicate with them about the emergency and the need for an ambulance, and they sent an ambulance within a few minutes after that. So please do consider turning on real-time text, RTT, just to know it is available. It is a way of communicating directly with PSAPs. All is well now with that emergency, but I thought it was good to share that story to show an example of how sometimes we need to be ready with different options to communicate with 911, given that we are deaf and can’t reach them directly with our voice, for instance.
One more thing I wanted to share about emergencies. We do have a recent rulemaking in place about the national suicide prevention lifeline. It directs that there must be a 3-digit number to make it easy for people to remember: 988. Starting July of 2022, it will be that number. Consumer groups including TDI, of course, pointed out that deaf people would not be able to call 988 directly. They wanted to see if it was possible to text 988 directly. So we did release a further notice of proposed rulemaking about that proceeding, with comments closing on August 10th. We do invite your comments. Do you think that text to 988 is valuable? Should it include real-time text, and when should it start? Thank you to those of you who have already submitted comments; we do welcome further comments. It is in docket number 18-336. I will type that in the chat as well. But you can also reach out directly to us at DRO@FCC.gov, and we’re happy to provide that information. We also have an ASL phone line. I would like to now invite Eliot to come back and provide more comments about how to reach 911 through relay services. Thank you, Eliot.
>> ELIOT GREENWALD: Thank you, Suzy. All relay services are required to deliver 911 calls to the local public safety answering point, which we call the PSAP. For the analog services – that would be the TTY-based services, speech-to-speech, and analog captioned telephone services – the call is delivered through the underlying telecommunications service provider. For most forms of internet protocol captioned telephone service, IP CTS, the user is already a subscriber to a telephone or voice over internet service, and that provider is responsible for delivering the call to the appropriate PSAP. But with some forms of IP CTS where the IP CTS provider is also providing the voice service, as well as for video relay service and IP Relay, the providers are at this time supposed to deliver the call – as Suzy mentioned with her emergency – to the registered location. That’s the location you give the relay provider when you register for service, and that becomes the registered location. However, the Commission adopted a change to that requirement, and those rules become effective less than half a year from now, on January 6th, 2022. At that point, the requirement shifts from registered location to dispatchable location if it’s technically feasible, and providers only use the registered location if it is not technically feasible to determine the dispatchable location. The dispatchable location is supposed to be where the person actually is when making the call. So in Suzy’s situation, because she was not at her registered address when this emergency happened, the call went to her registered PSAP and not to the PSAP where she actually was making the call. For example, if you’re on a smartphone or any mobile phone, the provider can very often take advantage of the location services on that phone. So if you’re making a 911 call outside the native telephone function and you get a message asking if you want to turn on location services, be sure to turn them on. Even better, if you’re about to make an emergency call on a mobile phone, make sure your location services are already turned on. They automatically turn on if you are using the native phone function – the phone number that comes with the phone – but most relay services are over-the-top apps and therefore do not use the native phone function, and that’s why location services need to be turned on when making that 911 call. The same thing applies if you’re using a tablet or a laptop or other computer: there are ways to locate somebody’s network information when they’re using the internet. So that’s how you get located with dispatchable location, if it is technically feasible for the provider.
So basically, hopefully, this will solve the kind of problem that Suzy faced. Suzy thought quickly and figured out a workaround, another way to get to 911 that would work for her. But hopefully, starting January 6th – or sooner, if providers implement dispatchable location sooner – that problem will not recur. In terms of 988, under the rules, providers are required to deliver 988 calls for those emergencies to 988 centers. And, of course, that’s not location-specific; the calls go to call centers that basically act as the first line of assistance during a mental health emergency. That’s what 988 is for. And very often, as part of that first-line service, the 988 centers can help the person connect with a local service they might need subsequently, or maybe immediately. Anyway, thank you. I’ll now turn the floor back to Mark.
>> MARK SEEGER: Thank you, thank you very much, both Suzy and Eliot, for your answers in regard to emergency access. It’s really valuable information for our audience today to learn what the FCC is doing in that area. And now I’d like to shift to Will. He is responsible for handling complaints filed at the FCC and for providing information when consumers want to learn more about specific issues. Will, can you elaborate on your role and then follow that with answering this specific question: how do consumers file complaints and seek assistance from the FCC?
>> WILL SCHELL: Absolutely. Yeah. Thanks very much, Mark. My name is Will. I oversee the complaints that come into the FCC regarding disability-related issues. And I just want to cover what you asked for, Mark: how to file a complaint, and also how you can get information. Maybe the issue that you are experiencing is not quite ready to be a complaint, but you want to find out more information. We can help with all of those things. So let’s start with complaints. First of all, the Disability Rights Office has two types of complaints that can be filed. One type is a standard complaint. This would include issues like closed captions or relay services – all of the issues that Suzy and Eliot and Diane have been talking about can be filed with us if you have a complaint regarding disability. With a standard complaint, when we receive it, we will send that information to the covered entity, and then the covered entity has to respond within 30 days. It’s important when filing a complaint that you provide as many details as you can. In fact, there are rules that require a minimum amount of information. So let me give you an example. Sometimes we get complaints that say “captions are bad on TV tonight.” Well, we actually need specific information in order to send the complaint to the right covered entity and to give them an opportunity to figure out what’s wrong and how to fix it. So at a minimum, we need to know what station you’re watching, what program you’re watching, what time you’re watching, and what exactly is going on with the captions: are they garbled, are they missing, or are they falling off the side of the screen? We also need to know whether you’re watching on the internet, like on an app, or over-the-air broadcast, or on cable service. However you file those complaints, we may ask some follow-up questions just to make sure that we can send them to the right entity. Now, the second type of complaint that we can receive regards telephones and telephone services. This type of complaint is very interesting. It’s called a request for dispute assistance, which we abbreviate as RDA. If you file this type of complaint, the FCC will assist in the communication between you and the covered entity, and we will do this for 30 days, working with the two parties to try to find a resolution that’s satisfactory to both you and the covered entity. If a resolution cannot be reached between the two groups, then you would have an opportunity to file a complaint with the Enforcement Bureau, and they in turn would have to make a determination about whether a violation has occurred. We have a very strong track record of helping the two parties come to a resolution, so it’s been a very positive complaint process. Anyone can file a complaint by going to FCC.gov/complaints. You can also call or e-mail us and we can help you file a complaint. If you call or e-mail us, we can provide you with information – if you just have a question about a rule, we can answer that – or if you want help filing a complaint, we can help you do that as well. We have an ASL customer support line.
It launched in June of 2014, and it allows deaf and hard of hearing consumers to engage in a direct interactive video call with a consumer specialist at the FCC who can provide assistance in ASL. That person can also help file a complaint or give you information. If you want to call the ASL line directly, you can call 844-432-2275. The hours of operation are between 9:30 a.m. and 5:00 p.m., Monday through Friday. Regarding access to information, we have a lot of information on our website, and we try to keep the website up to date, even up to the moment. Our website, where you can look through all of these topics that Suzy, Eliot, and Diane have been talking about – each one has its own link – is FCC.gov/accessibility. Each of those categories has consumer guides that summarize the various rules for that topic. There are also links to the actual rules themselves, if you want to read them. And near the bottom of most of those pages there is information called headlines, which is the most recent activity on that particular topic.
I want to mention a few other interesting resources that you may want to look at. Every two years, we have to file a report to Congress explaining what has happened in the last two years regarding communication services under the CVAA. The last time we filed, we sent this report to Congress on October 7th of 2020. That report lays out the progress that we made in accessible communication as a country, and it also includes the types of complaints that we received and how those complaints were resolved. It’s a pretty interesting read. Once I’m done speaking, I’ll put a link to that report to Congress in the chat, but it’s always available at our website, FCC.gov/accessibility. And just to reiterate, if you would like to e-mail us and ask for a copy of this or ask us any questions, you can e-mail us at DRO@FCC.gov, or you can call us and we will be happy to provide you all sorts of information.
So one more thing and I will pass it back to Mark: if you want to stay up to date on the FCC’s activities, you can sign up for our listserv. We send out e-mails on things that are happening at the FCC regarding accessibility, and you can subscribe by sending an e-mail to accessinfo@FCC.gov. There it is – thank you, Mark, for posting all of that in the chat. With that, I invite everybody to file complaints if you see something that you believe is violating our rules, and feel free to reach out to us at any time to ask about FCC rules and issues.
>> MARK SEEGER: This is Mark. Thank you so much, Will. I appreciate everyone’s contribution this afternoon to our conversation with the FCC. Diane, Suzy, Eliot, and Will, this is such great information and very valuable. I really appreciate your contributions today. I see that many questions have flowed through the chat from the audience, and people have been answering them online. I would like to also say thank you to all of our sponsors for making this panel possible today. I look forward to joining all of you at the next fireside chat and then the closing events this afternoon, which are the TDI president’s reception and the awards ceremony. I hope you all enjoy the rest of your conference today and throughout the week, and that we all reset and reconnect. Would all of the FCC individuals, our panelists today, please turn on your video so that we can see you again? Thank you so much. A big hand wave and applause to you. Thank you so much for your contributions to accessibility for individuals in America. It wouldn’t happen without each of you. Thank you so much.
>> SUZY ROSEN SINGLETON: Thank you.
>> MARK SEEGER: Enjoy your afternoon.
Fireside Chat
Bobbi Cordano, Barbara Kelley, Howard Rosenblum, and Eric Kaika
Transcript
>> ERIC KAIKA: I like the format of the conference, but I thought it would be nice to add a feature that I call the Fireside Chat, where I invite you to have a conversation, ask you questions, and learn more about your organizations. I know that we have worked with your organizations in many different ways, but as leaders, sometimes we have a broader view of what is happening today, and that is the goal of this forum: to ask questions based on your experience with your organization and your personal experience, for you to share with our audience at the TDI conference. So thank you for coming. My name is Eric Kaika. I am the CEO of TDI. I am a white male with a closely shaved beard. I have thinning hair on top; in a brightly lit room, you probably can’t see any of it, so essentially I’m bald. I’m wearing a purple shirt with a dark sports coat. We’ll go ahead and start with introductions and then enter the chat.
>> BARBARA KELLEY: Thank you, Eric. I am Barbara Kelley. I am the executive director of the Hearing Loss Association of America. I am a white female. I have blonde hair, I’m not wearing glasses, and I am wearing a bright yellow dress. It’s a bright sunny day.
>> HOWARD ROSENBLUM: This is Howard Rosenblum. I work for the National Association of the Deaf. I am bald, with a beard for the first time since COVID – I hope people like it. I’m wearing glasses with a blue suit, a blue shirt, and a pink tie. Not sure why I chose pink. But I’m glad to be here. Thank you, Eric, for inviting me.
>> BOBBI CORDANO: I am Bobbi Cordano, President of Gallaudet University. My pronouns are she, her, hers. I’m a female and I’m wearing a gray suit with a blue blouse. I have blonde hair. I am wearing glasses. And I’m just honored to be here with all of you in this great company.
>> ERIC KAIKA: So in the last two years, from our conference in 2019 to today – wow, with everything that happened, the world quickly pivoted, and it is impressive what we’ve done. Our traditional ways of educating children, working, advocating – all of that has had a tremendous change, and we’ve shifted to a virtual way of communicating. I think that we did well, not perfectly, but we adapted well. Still, I have noticed that there have been several issues with virtual communication. What are some of the barriers that have shown up, what issues still linger, and how can we make changes to improve on that? Go ahead, Howard.
>> HOWARD ROSENBLUM: You’re exactly right, Eric. The world has changed drastically. Many of us have been in advocacy for a long time and we have seen the same familiar issues and barriers that we all know. Sometimes with new technology, we have to figure out how we make it work for us, but over the last two years, that change happened so quickly on so many different levels: college education and elementary K-12 education, early intervention, health care, work both in-person and remote. Everything you can think of has changed for the world – not just for deaf people, but for the whole world, including deaf and hard of hearing people. The barriers range from simple ones, such as masks, to more complex ones, like technology where accessibility has not traditionally been part of the system. We have been reactive, and unfortunately that’s the case: we react to things as they change, and then we have to remind the outside world to think about us and include our needs, such as captioning, communication, and sign language, on so many different levels, both technologically speaking and in person. As changes happen, we have to educate ourselves and, in turn, educate the outside world to help them accommodate our needs. Sometimes they listen. Sometimes they don’t. But so much has happened in the last two years. Much like you, I’m impressed with how we’ve done. We have a lot more to learn, but our response has been powerful over the last two years.
>> BARBARA KELLEY: I can really relate to a lot of what Howard said, and I don’t know about all of you, but I found that at the pandemic’s beginning, we just got busier. We had people tell us that their lives are isolating enough, and then you add COVID on top of that, and it really made things more isolating. And one of the things we had people asking for was: I need communication access now. I need it because I’m taking classes, or I’m working remotely, or I need it in healthcare settings. We were really happy to work with your organization to come up with resources, pulling all the organizations together, and that’s why I love the collaboration with all of you to put out really solid resources for people. But I think one of the main things that we told people was that your rights as a person with hearing loss don’t go away just because (inaudible). Under the Americans with Disabilities Act, they don’t go away. So we found that people needed more.
>> BOBBI CORDANO: I want to emphasize collaboration. I saw our faculty being heavily involved with accessibility, telling employers and health care providers what they needed to do in order to be accessible, and being able to see that collaboration was so powerful, especially when we come together – those who experience hearing loss, who are deaf or deaf-blind – any time that we come together regardless of the way that we communicate and really look toward making sure that everyone has access. I think about the whole virtual experience being very different from face-to-face interactions. When we moved ourselves to the cloud, when we went virtual, it was so interesting to see that there were some gains that came with that, but there were losses that we experienced. One of those experiences of loss was that whole sense of isolation. Many deaf and hard of hearing and deaf-blind people rely on being able to lip-read and read people’s facial expressions. Even when I go to the grocery store or the doctor’s office, we can get by and manage in those environments; but when you bring in people now wearing masks, it completely disconnects us, and it was such an isolating experience, especially for those of us who are deaf, deaf-blind, or have a hearing loss. And then for those who use other ways of communicating, it became an issue because people needed to be 6 feet apart from each other. The New York Times showed the CDC’s advice and guidelines: when you look at all their graphics, they’re stick-figure individuals and they all look the same. But we realize we’re not all the same. We’re not like everyone else. The adults depicted in those pictures don’t necessarily represent people who are deaf-blind and how you communicate with them face to face, sometimes having to kneel down. When they say you need to be 6 feet apart from one another, sometimes that is completely impossible, and the guidance did not provide advice related to that. So when we think about the whole experience of what happened with our deaf-blind students on campus at the beginning of the pandemic, there was no other university experiencing that, because they don’t have the critical mass we have. I look back and realize it gave us an opportunity to work together, to learn about what we needed to do, and to reframe some of the federal government’s rules about who gets priority for getting PPE. When we think about interpreting services, interpreters weren’t considered to be first responders the way health care providers were, so they had to wait to get access to the supplies and equipment they needed for safety in their work environments. So you have that loss and disconnect that came from the whole experience of not being able to connect in the ways that we knew how.
When you think about Zoom too, we do a Zoom check to make sure you’re within the frame of the video and that people can see your signing. You do that by holding your arms up like this and making sure everything from the elbow to the top of your fist is in the video frame. We have been using these techniques to communicate using VRI, using video technology. Then we have the families that are working with us, and sometimes the families don’t necessarily know how to angle the camera so you’re best in the frame. That’s what we do as deaf individuals. We were able to adapt much quicker than other people. So, for people who are hard of hearing, deaf-blind, and deaf, there have been dividends we have reaped from the fact that we have these experiences and have had to adapt. I recognize that within signing environments, we saw a benefit within the signing community that was really different. When we think about people wearing masks, you’re still able to sign and get by. It’s not ideal for signing individuals. People would prefer to be able to read my facial expressions and have access to that, but even having to wear masks, you still can communicate with one another and you can get by and make certain adjustments, like slowing down your signing pace so people can understand you better. And when we talk about early intervention and teaching sign language from 0 through 3, the whole critical point about language mapping and brain mapping and what happens in the brain with bilingual development — you look at statistics: 1 out of 8 people over the age of 12 experiences a hearing loss. It means hearing loss is a human condition. This experience taught me that we need to really be investing in sign language and making it normal. It needs to be normalized in our country so there can be communication engagement where, regardless of wearing masks or not, you’re still able to get by and sign in a way that allows people to be engaged and connected — those who are hard of hearing, deaf, or deaf-blind. There are enough of us. And the difference and distinction we have here in our university compared to elsewhere is why I prefer to be here on campus: having to deal with people wearing masks outside of this campus is not something I want to have to deal with.
>> I agree. As you were talking, I was thinking there was a piece of data that I read that the third most treatable medical condition is hearing loss, and that more and more people are losing their hearing. Number 1 was a heart condition, and two was something else. So when you look at population growth, it’s starting to slow down, and the number of people with hearing loss is increasing. So I’m thinking that at some point — I read HLAA’s annual hearing loss report last week, I believe, and I’m pretty sure there’s going to be an inflection point where the number of deaf and hard of hearing people will exceed the hearing population based on the current trend. So you’re right. It’s very important to be aware and mindful of people who sign, and I agree with you that, you know, we have already been working with DH — the DHH coalition — but, ah, in the past two years, I’ve seen that galvanizing more organizations coming together that may in the past have been overlooked, and I really commend all of us and all of the communities out there as well. Bobbi, did you want to add something?
>> BOBBI: I want to be careful, but I find it interesting the language that you used when stating that it’s the third most treatable condition. What’s interesting is that what we don’t often recognize is that the other two conditions — heart disease, I think, and whatever the second one is — are chronic conditions, which means they never completely go away. You can mitigate the experience. You can provide technology. You can provide language and cochlear implants and other types of technologies. You can do things to mitigate the experience, but it never completely goes away. Even when you go to bed at night and you take off all of your different assistive technologies, you essentially become deaf. And that whole experience is still present for that person. And every day that person is living with this chronic condition, one that persists, you learn how to adapt your life to that lived experience, but it does take energy. It takes thoughtful engagement from others to negotiate those spaces in which you cohabitate. It is a constant negotiation process that occurs. I think that this collaboration in particular that has happened now, especially with the COVID pandemic, has been so powerful because of the fact that we have a shared lived experience. We think that the important thing is not about whether you sign. That’s not the important distinction. What’s important is us coming together due to our shared lived experience of having to negotiate spoken language environments. That’s something that’s constant for all of us, and I think that’s something we should continue to remind ourselves.
>> ERIC KAIKA: People with disabilities are natural-born engineers. We come into the world ready to find solutions to problems we encounter.
>> I am a natural-born and also a college-trained engineer, but speaking of technology, Bobbi brought up the power of collaboration, which has been important. Prior to COVID, we had to reach out to innovators and talk about universal design, but COVID made universal design far more apparent. For example, the video conferencing platforms that we’re all using to collaborate nowadays. We had to compare them: what platforms had and didn’t have. Captions, auto-captions, the ability to connect to C.A.R.T. and pin interpreters. The same with elementary education and college education, especially for mainstream schools. In a deaf classroom, the professors know what to do — like Bobbi mentioned, hold your hands up to make sure you’re clear and your signing space is there. But in classrooms where a single deaf student needed captions and interpreters, you have to make sure captions are enabled before the meeting starts, otherwise they won’t work. If they’re working with an interpreter and you have 30 tiles on the screen, how do you see the interpreter? We had to learn how to hide non-video participants, and other ways to pin interpreters so you can see them at the same time as you see the class. All of that we had to figure out in a very short amount of time. Educational institutions were trying to figure out what to do. Employers were trying to figure out how to accommodate their employees. TeleHealth, for example, with doctors not allowing people to come to their offices, so they had to start seeing patients remotely and figure out adding interpreters to their TeleHealth conferences. It made universal design and the gap very apparent and spotlighted all of that, and hopefully it will lead to change in the future, as we see now with interpreters. They’re almost everywhere. We see them with governors during press conferences. And a lot of that is because of our community advocacy. Some of it is because of lawsuits — we love doing lawsuits. And some of it is because the community has gotten used to it, as we saw recently, unfortunately, in Miami where the condo collapsed. The mayor provided an interpreter during that press briefing. They did. So it’s becoming the new normal. We still have more work to do, but hopefully that work continues and it will be common, like we see every day at the White House with the press briefings. They have an interpreter, and that’s becoming the new normal.
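[What follows is a minimal sketch, in TypeScript with hypothetical names rather than any platform’s real API, of the tile logic just described: hiding participants whose video is off and keeping interpreters pinned at the front of the grid.]

interface Participant {
  name: string;
  videoOn: boolean;
  isInterpreter: boolean;
}

// Order tiles the way the panel describes: drop non-video participants,
// then render pinned interpreters before everyone else so they stay visible.
function layoutTiles(all: Participant[]): Participant[] {
  const visible = all.filter(p => p.videoOn);            // "hide non-video participants"
  const interpreters = visible.filter(p => p.isInterpreter);
  const others = visible.filter(p => !p.isInterpreter);
  return [...interpreters, ...others];                   // pinned tiles come first
}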
>> BARBARA KELLEY: Those are definitely some positive things that have come out of COVID, and technology certainly presented some pain points for all of us. But we found that older people were willing to be pushed into technology, into using the virtual meeting platforms. We have chapters across the country, and we always said someday let’s have virtual meetings, but people got pushed into this and it was all a really good thing, and, ah, I would say there are good things that came out of the pandemic.
We were also talking about increasing numbers of people with hearing loss. In the United States, every day 10,000 people turn 65, and we know that hearing loss and age go together: in the age group 65 to 74, 1 in 3 have hearing loss; at 75 and older, it’s 1 in 2. So the numbers are growing, and these baby boomers aging with hearing loss are not like the older people of yesterday. Today’s seniors are still working. They’re having second careers. And hearing loss is becoming a part of overall health, and we’re finding that people want to take care of themselves. They want to do things so they can stay engaged and stay active and (inaudible). The pandemic brought some technology pain points, but I think it pushed us in a really good way.
>> ERIC KAIKA: So on that note, I find it interesting how things come to a point with the pandemic, and now legislators are talking about the 21st Century Communications and Video Accessibility Act. They are talking about refreshing or modifying some of the language, so now that discussion is happening: what should we be sharing with them as they rewrite the legislation? How can we make sure we continue to have technology access for deaf and hard-of-hearing people? I see, Howard, you have a great big smile.
>> HOWARD: We need to stop looking at specifics and having a list of exactly what we need to do. I think we need to look at the bigger picture. Any new technology will be accessible. Period. End of story — that is what we should say. We shouldn’t have to say that because of new technology, we have to set up these laws. It should just be across the board: any new innovation or technology should be required to be accessible. That’s the easiest way to do it.
>> BOBBI: And those requirements come from various organizations such as yours, with the support of research conducted at places such as Gallaudet, where we can support your organizations in setting the standard, and documentation from corporations and nonprofits and the public sector. The research can supplement whatever you’re trying to make the case for. A perfect example is within the airline industry. Many of us have been on a plane before and you try to watch the movies. Are all of them captioned? I don’t think they are. It’s fascinating to think about that. Even the TV shows are not captioned in that setting. Let me ask you, Howard, is it because the screens are smaller than 13 inches, or what’s the reason for the lack of captioned shows available? (multiple speakers at once)
>> HOWARD: I will have to come up with another answer then.
[Laughter]
All right. Take it away. It’s not a screen size issue. The fact is planes follow different laws. The ADA does not apply once you’re in the air. The ADA only applies on land in the United States. So airlines follow the Air Carrier Access Act, the ACAA. And that law doesn’t mention anything about captioning or making TV programs accessible. It’s a very old law. I believe —
>> BOBBI: It’s an antiquated law.
>> HOWARD: All of that.
>> BOBBI: That was established before that came out. (multiple speakers at once)
>> HOWARD: The thing is the safety video at the beginning. Sometimes they’ll do it live, where they’ll point to the exits and direct you to where things are on the plane, and so they don’t have to make that accessible, but any video versions of the safety briefing have to be captioned. The law doesn’t mention any other content. That’s why a few years ago, when we were working with a disability group, we were trying to address several different issues related to accessibility on the airplane: for example, wheelchair accessibility for the toilets, emotional support animals, and also IFE, in-flight entertainment. We tried to file a petition with the FCC arguing that TV shows are covered under the CVAA, and the FCC declined to pursue that, unfortunately. So that’s the short answer.
>> ERIC KAIKA: You know, that act also doesn’t protect people from being refused by the airlines. I remember at one point during COVID they refused to allow a deaf-blind person on the aircraft; they discriminated against a deaf-blind person and there was nothing he could do.
>> Howard. That’s one drawback to that old law the ACAA, the air carrier accessibility act. It stipulates no private right of action which means you cannot sue the air carrier. You can file a complaint with the department of transportation, but all they’re going to do is give them a slap on the risk and tell them to do better.
>> BOBBI: (inaudible) especially because people have stopped traveling due to COVID, let’s get back to talking about what Barbara mentioned about hearing loss and experiencing isolation during the pandemic, and what has changed due to COVID. There have been positive experiences, but also some challenges that we’ve been presented with. The positive side of COVID is that it taught us how to be a virtual community. We’re able to find each other virtually in the cloud. We’re using communication technologies and strategies very differently now. Even with my family, we have a mix of deaf and hard-of-hearing people. At Thanksgiving last year, we all got together and I was in a house with some of my hearing relatives, and they were sitting around the table and they were okay with having laptops and computers in front of them, and I had my iPad in front of me on a stand. My son had the same thing, where he had his on a tripod, and there were relatives who were deaf and hearing elsewhere, all joining us through Zoom for Thanksgiving. Any time anybody spoke, it was captioned in front of me. Nobody had to interpret. I could enjoy the conversation by having all members of my family together, and that was a first-time experience for me. It was a virtual dinner. It doesn’t replace the face to face, but it gave me that level of access to others in my family, which was very powerful. I think that’s been true for many people who have had that experience. It’s taught us different strategies that will influence how we think about getting together in person and what we can now do differently. People will have their phones in front of them so that the person who is next to them can have captions. What can we continue to innovate here?
I think another term is that we’re all hackers. When we see something, we’ll make sure to change it to accommodate and fit what we need it to be. I think hacker is the term the younger generation likes to use. Engineer is an out-of-style, outdated term, Howard, but when we talk about engineers, I think they’re more like hackers in that way and they like that term. I think that COVID, the pandemic, has really amplified the experience deaf people and people with hearing loss have of losing that sense of community. You talk about people with hearing loss who already felt isolated, and now think about wearing masks; it is really difficult to talk on the phone. That has made video communication really the most important thing. It’s what Steve does, I think. But that face-to-face conversation allows you to have interactions with one another where you can ask people to repeat themselves, and it’s really developing that relationship in a very comfortable way. I don’t think that Zoom allows us to do that. When you think about the younger generation and about children and students, one great example is that there’s a college student who was at a hearing college before the pandemic. And then COVID hit. The student was almost failing her classes because she couldn’t get the technology and things were not accessible. She really struggled. So she decided to come to Gallaudet. She transferred and has been able to flourish. She can communicate with people through sign language, and they can see one another. So we have the experience already under our belt of using technology and thinking about how we open our conversations, checking to make sure people can see one another before we even start. If you think about that student, how many are there like her who weren’t able to find programs able to adapt? Learners have been left behind, especially during COVID. So we have to worry about that. And we have to be sure about our students and the learners in our charge; if they’re experiencing this, this is probably true for most everyone. Virtual technology doesn’t replace the direct learning that you get when you’re in the classroom with your peers, but we have learned through this experience that some of our younger students actually have been better at advocating for themselves with teachers, telling them what they need. When they’re at home, they don’t have people who are there doing it for them throughout the day. So our support staff and our teachers and professionals at the Clerc Center realized we were doing too much for them before. They now allow them to be more autonomous and independent because they know students can handle it. It’s a mixed bag of gains that have come from this, but it shows where we have work to do to get ourselves caught up. I think the last thing for me that I remember is the experience from when COVID first started. We had a couple of faculty who were talking about COVID, and they did this in sign language. So we opened it up for everybody to watch, to talk about the implications of COVID, and 40,000 people from around the world viewed this and watched this. 40,000 people. So it had 40,000 views, which I think, you know, goes to show that there are people out there who need information from us, delivered in a way they can receive it, where they understand it, it is digestible, and they understand the impact that it has on their lives.
There is more to come in that way, where it is going to be our job to continue to share the information and share the resources and the expertise that we have with people. Barbara, you talked about those numbers, and it is staggering how many people are getting access to the information your organization has compiled. There are different ways we can go about improving that.
>> BARBARA KELLEY: During the pandemic, people became aware of their hearing loss when they were having to wear masks and stay 6 feet apart, and then they started to realize they needed some type of help. People were then inspired to go get some help, which is good. So you look at the bright side of things as you have noticed, but —
>> ERIC KAIKA: It’s your outfit. Very bright.
>> BOBBI: So, can I ask you, Barbara, because I wanted to hear more about that. Maybe I am so preoccupied with COVID and getting people back to the university this fall and this summer. Can you tell me about that experience of hearing loss? This is the first time that I’m hearing about that. Maybe I’m just behind.
>> BARBARA KELLEY: People were talking about masks, and masks reduce the decibel level. We heard anything from 5 decibels up to 25 decibels. So then you add 6 feet of physical distancing, and people were really starting to experience hearing loss and to realize, you know, kind of where we’re at with hearing (inaudible), and they don’t understand — we all lip-read. I think everybody lip-reads a bit. And lip reading is a tool, right? So you take that away, and then people started to realize that they were doing more lip reading than they might have realized.
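[A rough worked example of those numbers, assuming the standard free-field rule that sound level drops about 6 dB for every doubling of distance; the specific levels are illustrative, not from the panel.]

// Estimate how much quieter masked speech is at a distance.
// speechLevelAt: level in dB at `meters`, given a level measured at 1 meter,
// using the inverse-square (20 * log10) falloff for a point source.
function speechLevelAt(levelAt1m: number, meters: number): number {
  return levelAt1m - 20 * Math.log10(meters);
}

const conversational = 60;   // dB, typical speech level at 1 meter
const maskLoss = 12;         // dB, mid-range of the 5-25 dB figures above
const sixFeet = 1.83;        // meters
console.log(speechLevelAt(conversational - maskLoss, sixFeet).toFixed(1));
// ~42.7 dB: masks plus distancing cut a large slice out of audible speech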
>> HOWARD: I think it is similar for the signing community. You mentioned people who didn’t recognize their hearing loss, but deaf people are used to having access at work and at school. We get so much access given to us, and then when COVID happened, we didn’t know what to do. I think a lot of people realized that their rights had been taken away and were trying to figure out what to do at this point. That’s how you got 40,000 people watching your webinar. Prior to COVID, we probably had about 1,500 contacts a year, and since COVID happened, in the past year we had 4,000 people reach out to us. More than double, almost triple. So all of us have worked to the bone since March, April, and May, trying to put those resources together — what people’s rights are, and how to use technology. We rushed to put all the resources together for people to use and to empower the community to support themselves. We have seen sign language interpreters readily available at governors’ press conferences, and that’s been a huge change. And I think a lot of people have taken steps to take back the power — they felt they had lost their rights, and they’re advocating for themselves again. That’s the new world that we’re living in. Deaf and hard of hearing people are usually left behind in terms of getting information. Once it is disseminated, we’re usually the last to get it. And we have to fight to get that information. We as organizations have made it available, and we saw how hungry the community was for these resources. So that’s been a big change. Before I move on, I do want to recognize two specific groups out there that are still being left behind even during COVID, who aren’t getting services. First, deaf-blind individuals, who were already behind the deaf community, are now struggling. They need that tactile touch. Technology is not always accessible to them, and they can’t always get that touch because we’re doing social distancing, we have masks, and various other reasons. The second group is people working on the front lines in hospitals, apartments, banks, grocery stores — Amazon workers, many of whom are deaf or hard of hearing, having to go in to work. They’re missing out, they have to deal with people wearing masks at work, and they’re earning less on average, making ends meet as best they can, trying to figure out how to survive during COVID — especially people who have lost their jobs and are struggling financially. There’s a lot of recovery that we will have to do. We can’t just say that COVID is at its end. There are people still suffering.
>> BOBBI: I will add to what is required of us. Especially over the last year, since last summer and the murder of George Floyd, going forward we must focus on our recovery and what’s needed. You know? We’re still going through it, so don’t be mistaken. I know we’re all vaccinated here, which is a good thing, and we all agreed to take our masks off for the panel conversation. At the same time, we know the variants are out there, and we don’t know enough about whether or not our vaccines will be able to protect against the variants. We’re not out of the woods yet.
As we continue this fight to restore ourselves and to get back to community living: what we lost was really the aspect of community — the workplace within the community and even our home environments. As we work to bring back that sense of community, which I think is really at the heart of human existence — humans really live and thrive when they’re part of a community; it is really fundamentally a part of who we are in our nature — as we look to rebuild our community, we have to make sure that we think about intersectional identities and equity and use that lens as a part of this work. We need to make sure we look at every single decision through an anti-racist lens, asking: did every single person receiving this service, or who was part of this class or organization, receive equitable opportunities, have equitable experiences, and have equitable support? When we think about racism and the impact it has had on this country and understand the impact it has had on people of color across this nation, they have not had the same access to opportunities, to knowledge, and to support. At one point, I remember learning that 68% of young Black men who are incarcerated have a learning disability. And I often wonder, knowing that it is such a high statistic, how many of them actually have a hearing loss that’s been undiagnosed? So just them not understanding what the rules were within the class, simply because they did not understand or hear the instructions. Maybe the teacher asked them to do something and they simply didn’t hear it. One of the last things that people will diagnose is hearing loss. They’ll diagnose other conditions and behavioral issues before saying maybe this person has a hearing loss. So it’s pervasive. I think the opportunity that we have before us is that when we look at the inequities related to access, technology, and the knowledge that our organizations have provided, our challenge is going to be taking a look and reframing this as one of restoring and rebuilding communities — that we’re not looking to do the same thing as before, but we’re making that shift, and now is the time to do it. I think the motivation that people have is there, and the imperative is there as well. And our communities have spoken in this country with Black Lives Matter. I think that it’s time for us to be responsive and to learn together how to do this work going forward. I think that’s going to be important.
>> ERIC KAIKA: I agree with what you’re saying, and to paraphrase what you just said: accessibility is a civil right, it’s a human right. People in this country should be able to have access to the technology and information they need, and people who are deaf and hard of hearing have long been fighting for this access, but people of color and those from other marginalized communities have had their own battles as well.
Technology requires financial support. It requires access at home. It requires a lot of things to be in place to make sure that everyone has an equitable experience and that our services and our products are inclusive and equitable for every individual. I think organizations have recognized that, and we have collaborated well. We have been changing and trying to work with these communities to elevate them. As we modify the laws and make policy proposals, they do benefit the deaf and hard-of-hearing person. For example, if we design accessibility for the person with the most restrictive needs, it benefits everyone.
>> HOWARD: There needs to be universal access to the internet. Right now, many students are doing their education from home virtually, and do they have access to the internet? Or a computer or a device they can use? Maybe they have one, but they’re living in a household of 3, 4, 5 people. Many schools were able to hand out technology and equipment, including MiFi hotspots, to students. We recognize — or many of us do — that technology and phone companies need to lower their cost of service, especially for people who are underserved. And those low-cost services, are they enough for deaf and hard of hearing people? They may have a data plan on their phone that they’re maxing out because they’re using video services to use sign language. So they’re capping out on their data plans. We have a similar problem with deaf individuals who want to do school through video — making sure they have access. And you’re exactly right: those who are the most limited and impoverished should have access to basic needs. We’re talking about basic needs. You need to live. A roof over your head, water, food — and that should include health care and internet.
>> BOBBI: I think we might need to broaden our conversation here a little bit too. We talked a lot about civil rights. We talked about laws and making sure we have access. I think that you’re right, Howard, when you were talking about universal design; it’s really about inclusive design, and inclusive design doesn’t necessarily mean that we have to conform to fit what the norm is. The norm needs to be changed in order to accommodate and fit many different types of people. So that’s a paradigm shift that needs to happen. A lot of our laws and decisions that have been made thus far have focused on how we as people have to fit what the norm is in order to operate the same way. I think it is more about shifting the norm so we can thrive in any type of situation, and that’s a paradigm shift that has not happened; I think that’s going to be our work going forward. The second piece is economic justice. We have now come to understand, doing some preliminary research and looking at the data and doing some analysis here, that we estimate the sign language economy in the United States is worth about $2 to $3 billion. And it’s really about economic justice in terms of opportunities that are afforded to deaf people in this country. Look at those economic opportunities and economic justice as they relate to health care and access to hearing aids and technology, and look at the cost of aided technology. When children are diagnosed within a school system, it’s great, because insurance can then cover what they need and we can give them those devices — not always, but more likely it will be covered through the school district and the companies. There is some struggle there. It is not always covered. Then think about what has happened with people in the prime of their careers — people who were superstar salesmen whose careers were rising, and then they lose everything. By the time they’re in their 60s, they completely lose everything: their job, their self-esteem, their families, their spouse. And it was all due to hearing loss. The impact is felt by their family. Maybe they can figure it out, but then think about others, in janitorial services, who also need hearing aid technology. They can’t afford a couple of thousand dollars for a hearing aid. Some can cost up to $4,000. It is very expensive. So we have to have this economic justice frame for looking at people’s experiences with hearing loss and being deaf and deaf-blind, because without that, civil rights is not going to matter. Unless I can afford to receive what I need, I’m not going to be given the same opportunity to be part of a community, and we have to see ourselves as part of the larger fabric of this society and how we can do things with each other. When we look at spoken language communities and environments, how they can build consensus around providing the economic rights and opportunities for us to have access and do things with the rest of the community — that’s important too.
>> BARBARA KELLEY: Hearing loss is a tough nut to crack, and I think that we need all our organizations — not one organization, not one law — to do it. And I think there are laws in place that help, and our organizations are great advocacy organizations, but sometimes it comes down to individual advocates. And we know that there are great individual advocates who ask for their rights; they know their rights, they know what to advocate for. And a lot of people have been advocates. There are some days when I will send a bad meal back to the chef in the restaurant, and there are other days when I am too tired to do that. Somebody in a health care situation might have to fight to get access. They might not have the energy.
>> HOWARD: On top of what you both just mentioned, we have to look at the broader cost of living for deaf and hard-of-hearing individuals. If I want a smoke alarm or a carbon monoxide alarm, or even just an alarm clock to get up in the morning, all of those are far more expensive than buying regular alarms for your house, and you can’t go to just any store. You have to find a specialty store to buy them. And that’s where universal design comes in, with cost standardization. It should be scaled. Of course, now we have different technology — IoT, for example, the Internet of Things. Thank you for that. We need to be able to know when the washing machine is done, be it your laundry or your dishes, what have you. They beep. I can’t hear it. Most deaf people go back and forth to the laundry machine to check when it’s done.
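[A small sketch of the idea Howard describes, using the open MQTT protocol via the mqtt.js library; the broker address, topic, payload, and flashSmartLights helper are all hypothetical stand-ins for whatever a given smart appliance exposes.]

import mqtt from "mqtt";

// Hypothetical helper: trigger a visual alert, e.g. flash smart bulbs.
function flashSmartLights(): void {
  console.log("Cycle done: flash the lights instead of beeping.");
}

const client = mqtt.connect("mqtt://broker.home.local"); // made-up broker

client.on("connect", () => {
  client.subscribe("home/washer/status"); // made-up topic
});

client.on("message", (_topic, payload) => {
  if (payload.toString() === "cycle_complete") {
    flashSmartLights(); // a visual, not audible, notification
  }
});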
>> BOBBI: Oh, just to make light of that, Howard, one of the stories within my own family is that my mother was deaf, and any time she started the appliance, her body instinctively knew the moment that it was done. My hearing sister asked, how do you do that? It was just something that was part of who she was, second nature. Somewhere along the way, we have lost that skill. Obviously, it was a life skill of hers. She was good. She knew exactly when 8 minutes was up. Me, I set the clock, set the timer, and then I walk away.
>> BARBARA KELLEY: Forget I put that in there. That’s very true, because I heard people have stopped reading paper maps, so our skills are shrinking. So get out those paper maps and start using them. (multiple speakers at once)
>> HOWARD: I have to get a map to find things.
>> BARBARA KELLEY: I am lucky I made it to the Gallaudet campus today. It’s true.
>> ERIC KAIKA: Right. And deaf-blind individuals too. Right. Even the FCC doesn’t provide —
>> HOWARD: Even the FCC doesn’t provide accessibility for those individuals to make phone calls. They said, sure, here are video relay services, free videophones for everyone, free IP CTS for people who need captioning. But deaf-blind individuals, especially those who are very poor, get stuck behind, and they have to pay for it. Deaf-blind equipment is very expensive because it is such a small market.
>> ERIC KAIKA: (no audio)
>> BOBBI: I started to have different conversations with people about how we can rethink how we, you know, raise our future generations. And I’m thinking about several different villages in the world — one in South America, one in Israel, and even our own Martha’s Vineyard at one point. And there was a researcher who said that if you have 3% within a community who are deaf, they found that within those small villages, everyone would sign. You only needed to have 3% of the population be deaf for that to occur. If you think about the statistics here in America, that number is far greater than 3%. And within these villages, even with our own Martha’s Vineyard, the researchers were asking people within the community why they signed, and the response was: why wouldn’t we sign? So it was a fascinating perspective that we have kind of lost here — seeing that if a person is different than me, how can I work to engage them and to be with them and work with them. The cognitive research shows that if you teach all children sign language from birth through 3, everyone benefits as a result. And I did ask our researcher at Gallaudet University: let’s say that you expose people to sign language from birth through 3, and maybe even up to the age of 5 when they start to go to school, and then they stop at that point, where they’re not learning it any longer in school, and they go off on their own lived experience — what would happen when they get to be in their 50s or so? If you take a person who has never been exposed to sign language from birth through 3 and compare them to a person who was exposed, they both may fail that first test of signing fluency at a later age. It would probably be the same. But the person who was exposed is going to pick up the language much quicker than the person who was never exposed, because the brain of the person who had been exposed to sign language from birth is mapped in a certain way to take that in and develop a language. When we think about our own systems within education, I think that’s the long-term game here. It’s that 50-year game for us, and I think we need to have more of those 50-year games — I guess you can call them strategies. We need to have more of those 50-year strategies that we start to use: signing with everyone from birth. That relates to mental health. It is tied to brain health, because we know that the visual experience of using sign language enhances certain complex functions within the brain and may also help prevent different diseases such as Alzheimer’s. And who knew? That’s how we can really change the whole view of ourselves and transform that view of deaf and hard of hearing people — who, you know, hack, who are different, who have been marginalized — reframing that to say we have value, and our lived experience is of value to the rest of the world. In those villages I spoke of earlier, deaf individuals were valued because of the fact that they were human. They never once thought about excluding them, even with only 3% of the community being deaf. I think we have lost the way that we value one another, and it is time for us to find that again in our practices and our policies and our discussions. Yeah.
>> BARBARA KELLEY: I agree with that. Human beings’ early stages of development are so important. There are so many great points; I won’t forget so many things you have all said. And we talk about marginalized communities, and for deaf-blind people (inaudible). I acknowledge — I think it’s an important word. You brought up broadband, and I remember when the pandemic began, there was such a surge in caption services. The demand was overwhelming, and those of us that were in the know about caption services, you know, we used them to stay connected. (inaudible) in rural areas: do people have access to health care for hearing loss, access to an audiologist or any specialist?
>> ERIC KAIKA: For me, it is frustrating to see. It’s a basic need. If you think about how America was founded, everyone should have that pursuit of happiness — to be happy, to have access to education, to healthcare, to communication in general. So I think it goes back to what you said earlier: if you create new technology, make it universally designed for everyone. And that concept has to start young too. It has to begin with our children. I use my daughter as an example. She’s 5 years old. She can hear. I’m white. My wife is Asian — Japanese, specifically. We have taught our daughter four languages. She has a real skill when she interacts with children of other backgrounds. And I think it’s because she has that access; I can see that with her. Can you imagine if the world was exposed to other languages — one or two other languages, one of them specifically being a sign language? Now, sign language is not a universal language. I get that. But the ability to gesture: as a deaf person, I go to Italy and someone is talking to me; oftentimes they’ll gesture and make connections so we understand what we’re communicating. If I go to South America, the same thing. We have an understanding. America, not so much. You know, I gesture and they look at me like a deer in headlights. So we need to give children that access to language, and that in turn will elevate the community.
>> HOWARD: I totally agree we need to expose youth to different cultures and different languages, but we also need to start exposing them to different people, their backgrounds and their needs, including disabilities. For example, as an engineer, I follow how technology changes. And sometimes people mean well, but they go about it the wrong way. Make a remote, a TV remote, so much simpler. I figure, okay, why not make it easy for hearing people and blind people: no more buttons, you can just talk to your remote. Okay. But deaf people, we’re sitting over here, and then it’s this big rigmarole. We still have the supply of old remotes and we get left with the leftovers. So this example of creating universal design hasn’t been effective. You created it for one universe but didn’t consider everyone else. It shows a lack of imagination and a lack of exposure of these engineers to people who have different needs or a different way of living their life. If they were exposed to that and they saw it, they would get it and include it in their design. It should be taught in all fields: architecture, for example, needing to improve physical accessibility; medicine, needing to understand different approaches to hearing loss or disabilities; technology and the design of technology, as well as internet accessibility. We talked about WCAG. A lot of people who are designing programs, who are technical engineers, don’t even know WCAG, the Web Content Accessibility Guidelines that make the internet accessible.
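[One concrete instance of what WCAG asks for, sketched in TypeScript against the standard browser DOM: attaching a caption track to a video with the HTML5 <track> element. The file names are hypothetical.]

// Build a captioned video element (WCAG 1.2.2, Captions for prerecorded media).
const video = document.createElement("video");
video.src = "briefing.mp4";       // hypothetical video file
video.controls = true;

const captions = document.createElement("track");
captions.kind = "captions";
captions.src = "briefing.en.vtt"; // hypothetical WebVTT caption file
captions.srclang = "en";
captions.label = "English captions";
captions.default = true;          // show captions without user action

video.appendChild(captions);
document.body.appendChild(video);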
>> BARBARA KELLEY: We are associated with Teach Access. That’s pretty exciting. Those were a couple of things that I had. You’re so good. Teach Access — and that’s exactly what it is: an organization that is trying to get students to, you know, build accessibility in rather than retrofit devices afterward. There are a variety of different ways. We’re there as disability organizations to give our input, and I find the discussions fascinating.
>> HOWARD: We need a lot more organizations like that. Teach Access is a great program, and we need that in every elementary school, middle school, college, and even post-graduate program. That way they become sensitized to our needs and other people’s disabilities. Teach Access is just the beginning. It’s a great model.
>> BOBBI: You know, it’s fascinating because that just reminds me of some of the work we have been doing here at Gallaudet, and over the last year, we established two new centers and we had the technology access program for several years looking at access to technology. We did research on digital technologies, we had the light lab. We have work that’s happening in the digital sphere and a lot of developments in that area. But over the last year, it has really shown me more about thinking more broadly than just technology. And it’s fascinating because when we look at the advocacy that our organizations are doing together especially during COVID, that has brought forth research and information to look at the core of application for how it can be used in various areas like businesses, services, the government, and health care. It’s gotten careful in that way. Now understanding a couple of other things. For example, in the last year, we set up the center for black deaf studies and this is the first time that we have done this and it was so fascinating because I saw a story that was shared where we had two older – well, one is on our faculty the director of the center and then Gerald Miller talking with Dr. McKaskell sharing his story about his living experience in 1992 after Martin Luther King, Jr. was assassinated. Gerald talked about this in a very powerful way. They made an announcement that everything was going to be closed and transformation had been stopped and the buses were no longer running. He had to walk 12 miles to his home and didn’t understand what was going on. Both of them shared experiences both of how they felt left out of the civil rights movement and on and it’s because they didn’t have seas to information, captioning, they weren’t interpreters at that time. So the black deaf community, community members are feeling their movement is in a very different way. Gallaudet, we are learning how we have not engaged included, or even had that movement in the same way. We still have a lot of work to do as a result. But listening to these stories especially when I think about 2015 NAD, working with the black deaf community realizing that NAD did not include the black deaf community and they were isolated and disconnected. So we recognize there is still work to do going forward and Gallaudet is doing its work with anti-racist work. This is one area where I see that it’s just started to make me realize the different stories are out there we have yet to hear that is going to continue to shape the future. When we listen to the stories, that have gone to think to influence my thinking about the future and the opportunities that we have, and the power of those lived experiences that we have yet to hear. If we can elevate those stories more so with the Latin X community, Asian deaf community, an indigenous community, when we hear the stories and lived experiences, it will allow the word to get out there to be able to share the stories within our communities. They’re so important because they’re stories about resilience and being able to make it through and to understand the whole human condition of what can change the future and what can impact one’s life. It is about hearing these stories. About hearing it will come up with better solutions.
And the second thing is that over the last year, we saw the greatest threat to our democracy. It’s fascinating to think about what that means for us. For one thing, it’s about building an understanding of the importance of having dialogue and discussion and really wrestling with differences. I remember going to Congress at one point, and somebody brought up the fact that they heard there was all this commotion happening at Gallaudet. I told him that we’re always making sausage, all the time, and it can be messy. And when I said that, he really got what I was talking about. And I think that’s really the heartbeat of our community too: our various organizations struggle with something — conversations where we’re talking about children, or what we need to do to learn — and the power of democracy is something that has allowed us to be a nation of joiners, working together with one another. I think about our work especially as looking to strengthen our democracy, because without our organizations, without that representation, our systems start to become weaker as a result. For the people who are watching the panel, I think it’s important to remember that their engagement with our various organizations, being part of the community and doing the fight — it’s hard and messy, and relationships can be hard, but the value is there. That’s really what keeps democracy alive. It’s really what keeps that hacking culture, that problem-solving culture, so prevalent within our community. It gives me the energy and the belief in the future and in what’s possible. We have to talk about technology, but we don’t often talk about democracy — or we talk about technology without talking about economic justice or racism or audism or all the other isms. What would happen if we had those conversations together? I am looking forward to that, especially after COVID. I think that’s going to be our work going forward.
>> HOWARD: I think the shift in politics these days has left our country more divided than ever. Many people have been left out, especially Black and indigenous people of color, and even more so people with disabilities. And that divide almost seems like you’re either for or against, and that just doesn’t make sense. We should all be for equal access for everyone, no matter your disability, your gender — and unfortunately, our country has become polarized. I don’t know how we go about having a conversation to resolve that. I’m not sure. I’m a lawyer, and with all of my training in law school, we learned that civil rights was a process of becoming more equal, but now I don’t see that anymore, and I don’t know if that’s the path that we’re even on anymore. We have come to a fork in that road.
>> BOBBI: I notice that the women here seem to be lifting your spirits up today, Howard. Kind of noticed that.
>> HOWARD: Sorry to be the Debbie Downer.
>> BOBBI: Right. We’re bookends. Right. I will also mention that we established the center for democracy. I am now teaching a class with Dr. Stern and Lorenzo Lewis and the class is entitled dialogue deliberation and democracy and deaf America. And there’s a person who wrote a book, a very short book they recommend that you read. It’s called width and it was written by the president of the Kettering Foundation. He’s been a student of democracy and democratic action. It’s a very short read. It’s online and free. In this, he makes the thoughtful argument about by, for, and of the people. That really what democracy means is the 80 to work with one another, to work with the people. And what we’re asking today I think is that same type of language especially when we’re going to legislators and schools, when we’re looking to be able to innovate. We really are looking for people to work with us and that has inspired some of my thinking about the importance of how we go about framing our future. You and I, Howard, are trained attorneys and we need to have systems or law and the rule of law to be able to govern and democracy. I have to tell you our systems of laws really do not work well enough for people with disabilities. They just don’t. And for BIPOC people as well. I do think that some of the heart really goes back to the supreme courted and the policy seeding that happens at that level and even with congress. We have a lot more work to do to be able to create a system where people will feel that the rule of law has value to them as an individual. My worry right now is that people will fool that the rule of law does not show its value and doesn’t have applicability and therefore, they won’t obey it. I think that’s troublesome where we have to create and build something that goes to show the value of our democracy to people and the experience.
>> ERIC KAIKA: Oftentimes I share with members of our community that the best, or the ideal, way to influence change is to share stories. Stories lead to data. You collect the data, and for organizations like us, the more data we have, the more we can justify what we’re recommending, and then the easier we can come to a resolution. So we need to engage with members of our community and listen to them. I know it is also easy to overlook people, especially in Native lands or territories that are often ravaged by hurricanes. They’re still part of our country. We can’t just keep focusing on us and assume we’re doing (inaudible) and not include them. We need to bring them in.
>> BARBARA KELLEY: (speaking)
>> BOBBI: If I can bring Dr. Stern into this conversation: one thing he talks about and teaches in the class is the importance of democracy existing not just at the individual level; you need to have the organizational level alive within a democracy in order for a democracy to be effective. So you need the individual, the organizational, and also the governmental level, and that’s what creates the ecosystem for change. It’s what Tocqueville had written about here in America. And that’s what I’m seeing here: the delivery of services tied to the mission that our organizations have, delivering that to the community. But we’re only as good as the community that joins with us. And we need their stories. We need that data and that information. That’s the commitment that’s needed for the change that is so desperately needed.
>> HOWARD: A final comment, going back to the purpose of our panel today: technology. Technology is one way for us to connect, to share our stories, gather that data, and find ways to have a dialogue to change the world for the better. But it’s only as good as the technology we’re able to use, and that technology needs to be accessible — and that can be a powerful thing we can influence as a deaf and hard of hearing community.
>> BARBARA KELLEY: Can I ask that question of all of you? Do you think we have moved the needle?
>> HOWARD: Through lawsuits, yes.
[Laughter]
>> BOBBI: Barbara, to the question of have we moved the needle: it’s fascinating to think about. If you ask the generation before us what their experiences were compared to what we’re experiencing now, they would say of course you have. If we were to ask the younger generation, they would say no, you haven’t to this point. I think that’s the role of young people. Quite honestly, I think we need to listen to them. We need to be motivated to see what they can bring to this conversation.
>> HOWARD: I think that’s part of not having anything before and now having something that’s new. If you’re born with everything, you don’t need more.
>> ERIC KAIKA: I’ve had several interns mostly from graduates. They are recent graduates and some current students. And I told them I was impressed with them compared to the older generation that is. They are able to emotionalize and shake things up. It took us years to get to that point. And I will get the younger generation and they’re really in a position of power to really change things and help — change things for the better for everybody.
>> BOBBI: I will share something with you, Eric. First, I love that story that you shared, and I do want you to remember too that the older generation, from what I saw happen, was pretty similar in that whole value for one another. Because a person is deaf or hard of hearing or deaf-blind, there is care and value within that, just because. There was that connection to each other. So we have to find those connections. The tools they used back then were at a slower pace, where they had to write everything down in a book and things had to be sent through snail mail. It was all done manually back then. Can you imagine? The TDI address book was really our bible. You know? And now it’s so fascinating, because with cell phone technology, all the different technologies that have come about, how we keep in touch — there are different challenges that come with that nowadays, but I think what’s consistent and similar is that value for one another, that value of care, and I hope that’s something we continue for each other: showing value for one another, that there’s that shared lived experience, and that’s sufficient to connect us and to support one another. Without each other, we will lose more as a result. And I am also thrilled to hear the stories you shared about the interns from Gallaudet. It is great that Gallaudet is working with you.
>> ERIC KAIKA: So joining and leading TDI has been my most rewarding experience. I have seen how organizations work together. We serve such a diverse group of people, and we stick together regardless of degree of hearing loss. The goal is to make sure that all of us have access, and I look forward to us working together more and pulling in other organizations to grow our coalition. Unfortunately, we have run out of time. We can probably have a second or third chat. This was truly enjoyable. Thank you.
>> HOWARD: Thank you, Eric, and thank you, Barbara, and thank you, Bobbi.
>> BOBBI: It’s been so wonderful. Thank you, Eric, for thinking of this idea and inviting us to the table, and congratulations to you. And I look forward to supporting your future success. Again, congratulations.
President’s Reception
Jan Withers, TDI President
Transcript
[Start transcript
Visual description: TDI president Jan Withers wearing blue shirt and glasses with short gray hair standing in front of a solid blue-green wall.
Good afternoon! I am Jan Withers, president of the TDI Board of Directors and I represent the southeast region of the United States on the board. Now for a brief visual description…I am a white middle-aged female with short gray hair wearing blue glasses, a blue shirt and simple jewelry, standing against a solid dark blue-green background.
Welcome to the President’s Reception! Traditionally, at TDI’s Biennial Conference, we have a President’s Reception and that’s when the President of TDI’s Board of Directors provides a status report on TDI. For this reason, the President’s Reception also functions as the business meeting for TDI members.
What makes this year unique is that because this conference is virtual, I am asking you to imagine that while I’m speaking, you can see behind me a long table laden with delectable dishes with tantalizing aromas wafting your way!
[image appears of a long table with four rows of plates filled with colorful desserts]
The good news is – we have the technology to make this conference work while assuring everyone’s safety. We also have every intention of making our conference in 2023 a hybrid, asynchronous version, building on the lessons learned from the world of tele-conferences during the pandemic. And we plan to have all our future conferences take place during the week of July 26, which is the anniversary of the ADA.
The past two years have been extraordinary! We experienced major transitions and unexpected challenges, specifically the COVID-19 pandemic. But TDI as an organization not only remained stable but also is stronger. After 23 years of dedicated service as TDI’s Executive Director, Claude Stout retired.
[image of Claude Stout appears: Older white bald male, slightly smiling, wearing dark rimmed glasses.]
Thanks to his careful planning, we had a solid foundation for his successor to build on. I am thrilled we now have Eric Kaika as our Chief Executive Officer. At Claude’s suggestion, we changed the position title from Executive Director to Chief Executive Officer to reflect current trends in the non-profit management world. Eric certainly hit the ground running and was up to the challenge of maintaining stability during the economic chaos caused by the pandemic.
[image of Eric Kaika appears: White middle-aged male with black glasses wearing a white button-down shirt and dark gray blazer.]
The Board of Directors gained three new Members-at-Large: Tina Childress, Mei Kennedy, and Opeoluwa Sotonwa.
[images of Tina, Mei and Ope appear diagonally: (Tina) An Asian woman with square glasses smiling at the camera, wearing a black and gray striped cardigan over a white shirt. (Mei) An Asian woman with long black hair smiling at the camera, wearing a blue, scarlet, and black Aztec-patterned shirt. (Ope) An African-American male with a small smile wearing a black suit.]
The rest of the board includes John Kinstler, Midwest region representative and vice president;
[image of JK appears: ]
Mark Seeger, central region representative and secretary;
[image of Mark appears: ]
CM Boryslawskyj, northeast region representative and treasurer;
[image of CM appears: Glamor shot of white woman with dark hair wearing dark shirt behind an emerald green background.]
Jim House, west region representative;
[image of Jim appears: ]
and Matt Myrick, member-at-large.
[image of Matt appears: ]
By the time you see this video, two elections will have taken place for the positions of Midwest and central representatives. The results will be announced at the end of this conference.
Before his retirement, Claude began the process of transitioning TDI to a more virtual agency. We no longer have a brick-and-mortar facility but continue to have a clear and active presence in the Washington, DC area due to the need to work with various federal agencies and partners. Eric has devoted his time to making TDI’s operations as efficient and effective as possible using available digital tools. One good example is “The Blue Book.” It is now digital and better able to meet the needs of our members, consumers and stakeholders in today’s world. Within the next two years, we will make the TDI World magazine digital as well.
TDI continues in its major role as policy advocate on critical issues pertaining to Information and Communications Technology. For the entire year of 2020, TDI led or signed on to a total of 29 filings across five federal agencies. So far in 2021, TDI has led or signed on to a total of 25 filings across five federal agencies.
To ensure TDI’s focus is clear and strategic, the Board along with Eric just completed a two-year strategic plan, covering the period of July 2021 to June 2023. Please visit our website to learn more about it. I know you will be pleased with what the strategic plan covers, including new mission and vision statements.
I am happy to report that TDI is healthy financially. We were quick to recognize the upcoming economic downturn and act to maintain TDI’s financial stability. Normally, TDI would have two employees, but Eric has been the sole employee. The good news is that Eric seized an opportunity and recruited six bright young interns from Gallaudet University to assist him with various tasks and projects.
[image of 6 interns appears: ]
Thanks to smart budget planning, he will soon be able to add a second employee. Of course, the one critical factor in TDI’s financial stability is the generous support of sponsors. We clearly would not be where we are today without them. Please visit our website to learn about our sponsors!
I am delighted to announce that for the third year, we have awarded scholarships to six deaf and hard of hearing high school graduates to help them defray the cost of post-secondary education. We are committed to making this an annual feature of TDI.
[image of 6 scholarship recipients appears: ]
Finally, I want you to know Mark Seeger will be rotating off the board at the end of this conference when his term expires. Please join me in thanking him for his wonderful service as the representative of the Central Region and Board Secretary. We definitely will miss him!
Now, it is my pleasure and privilege to introduce Jim House, who represents the West region on our board. Jim will be your host for TDI’s Awards ceremony.
End transcript]
TDI Awards
Jim House, TDI Director
Transcript
Next Generation Relay
David Bahar, Telecommunications Access of Maryland
Transcript
>> DAVID BAHAR: Hi, everyone.
My name is David Bahar.
Let me turn off my background.
Give me one second.
Much better (chuckle) OK.
Hi, everyone.
My name is David Bahar.
Thank you so much for the introduction, Dan — Jan.
This plenary session is called the next-generation relay.
I’m very excited about this topic and having this discussion with you.
Next slide, please.
So first I want to open by asking the important question: Why do relay services exist?
I’m a historian, so I know and believe that understanding the “why” helps us find solutions for the future.
The deaf community and their needs vary greatly, but the phone is limited to only those who can hear and speak.
A.G. Bell, Alexander Graham Bell, tried to develop a communication aid for his wife, who happened to be deaf, and that invention became the telephone and opened a new world of communication for generations to come.
Next slide.
On the screen, we are showing a picture of an antique, a very old telephone.
And that phone was the revolution in communication.
It overtook communication in that age and time.
More than 100 years after its making, we still have the telephone.
So we have one big problem, though.
It relies on sound.
And on someone being able to voice.
Those who are hard of hearing, deaf, or speech-disabled aren’t able to access that technology.
As long as the phone can only be used by those who can hear and speak, that technology leaves many people behind, and it has for generations.
Next slide, please.
The ADA, the Americans With Disabilities Act, was not signed until much later, a century after the phone was invented, which is a very long time.
Generations of deaf people grew up without being able to use the phone. Instead of hearing and speaking on the phone themselves, they relied on the kindness of strangers, friends, family, and neighbors.
To help make phone calls.
They would write out what they needed and ask, hey, would you mind making this call for me?
And they lost their autonomy. For generations, our parents and grandparents, who before the invention hadn’t felt as unequal, now, after the phone, did.
(Technical difficulties).
That [the teletypewriter] was first used by the military, rail operators, and news services to provide information at a distance.
At that time it wasn’t used for relay services, but consider that the technology existed back in the ’30s that could have handled that.
But there was a missing link.
How to have that technology connect with telecommunication devices. So we came up with a coupler, which you see in the picture, on the right in the center.
There’s a man wearing a white and gray button-up shirt, and, as many of you know, in the middle of the picture there’s a coupler next to a typewriter, and there’s also a lamp.
The coupler is a small box that the telephone handset rests on.
That was the magic link that made it possible to use the home phone line to be able to communicate for the deaf community.
Isn’t technology amazing?
Next slide, please.
Now, as TTY spread throughout the community, which actually happened pretty slowly, that led to the creation of volunteer relay services.
Technology made relay services possible.
A few states set up relay services.
These were trained communication assistants/operators who relayed information through the TTY.
They would listen to the hearing person and type back and forth with the deaf person.
And they would use their sense of hearing and speech to facilitate communication.
And that was quite a leap in creating autonomy for the deaf community.
And that was in the ’80s, before the ADA was passed.
And not a large amount of deaf people had a TTY in their home.
But the policy, the ADA, once it was signed, opened doors for the community, and we really wanted to establish a nationwide TTY relay service.
365 days a year, seven days a week.
And the ADA allowed that.
And the idea, the concept of functional equivalence, means an equal playing field.
And that service was the pillar needed to support the community.
Being able to make a call at any time: that was functional equivalence.
So with relay services, you don’t have to rely on other people to help make those calls; you become independent and autonomous.
That autonomy is great but it’s not complete.
Because how do you make that call if you don’t have parity?
There was still a gap in equity.
Let me give you an example.
Suppose a person who doesn’t use relay wants to call someone who does.
They need to call the relay service first, connect with an operator, and then that operator connects them with the other person.
Which takes up a lot of time.
A hearing-to-hearing call is usually a direct peer-to-peer connection, but with relay you couldn’t do that; you had to connect with a relay operator first, so you have a third party in your communication.
So it didn’t provide an equitable experience on the phone as hearing people had.
Once VRS was established, we had the same issue.
It didn’t have parity because VRS, as a new technology, this great and wonderful service, needed this wonderful, amazing thing called the Internet.
The VRS back in the day was not a pretty sight.
If you remember that experience and what it looks like — let me see if I can show you — next slide, please.
Some of you haven’t had the unique pleasure of experiencing VRS back in the old days.
You had a videophone, and at that time the videophone didn’t have a phone number; there was nothing like that attached to it.
It had an IP address associated with it.
So if I needed to call somebody, that person had to have a videophone, and I had to ask what their IP address was; that’s how we would call each other. We used our IP addresses as phone numbers, which was not great.
The problem with the IP address is it’s always different.
Instead of having one phone number, the IP address, every time you connected, would change.
And it was quite a mess (chuckle).
Think about that: as a hearing person making a phone call, it’s easy to call someone’s phone number and connect with them.
But if they had a phone number that was different every time, it wouldn’t work. So that’s where we get unequal access.
And you would have this problem when calling a relay user; you might have several different numbers or IP addresses that you would have to use.
So for a work phone or mobile phone, you can call anybody, like nowadays through Zoom I can call in and it’s the same number.
I can text somebody and it’s the same number.
Now we are all on the same playing field.
But for me, when I was using the relay service, it depended on the individual.
I had a videophone in my office and a laptop set up at home.
So — and a Blackberry.
So, again, no parity there.
If my e-mail signature included all of the phone numbers, it would be a dictionary (chuckle).
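To make the problem concrete, here is a minimal sketch, with made-up numbers and addresses, of why dialing by a changing IP address broke reachability and how a directory keyed to a stable ten-digit number fixes it; it illustrates the idea only, not any provider’s actual system.

```python
# Hypothetical sketch: dynamic IP addresses vs. a stable-number directory.
# The directory maps a ten-digit number to whatever IP the videophone
# currently has, so callers never need to know the changing address.

directory = {}  # stable ten-digit number -> current IP address

def register(phone_number: str, current_ip: str) -> None:
    """Called each time a videophone reconnects and receives a new IP."""
    directory[phone_number] = current_ip

def dial(phone_number: str) -> str:
    """A caller only needs the stable number; the lookup finds today's IP."""
    if phone_number not in directory:
        raise LookupError(f"{phone_number} is not registered")
    return directory[phone_number]

# The same number stays dialable across address changes.
register("3015550123", "203.0.113.7")    # Monday's lease
register("3015550123", "198.51.100.42")  # Tuesday's new lease
assert dial("3015550123") == "198.51.100.42"
```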
Next slide, please.
There was no equality in dialing, and it was a negative experience for those who needed emergency services. I mean, 911 had no way of getting a callback number through relay in those times, and if a videophone made calls, we had to connect out through a North American Numbering Plan number, which had an address that would connect to 911, but that’s not where I would be.
The videophone number would not link to where I was.
There was no phone number or location information documented with my 911 information, so when I was calling, 911 had no idea where I might be calling from.
Our solution was to establish a database.
With that database, we merged the telephone numbering system information with the Internet information.
So there was a bridge.
As an example, we would have a phone number, you know, with a series of numbers, and that would belong to your videophone.
That number then gets loaded into the database, with your device and your address connected to that phone number.
That’s all we did. It was very simple, very bare-bones and functional, and it was reliable and made a huge impact.
Finally, owners of VPs could make calls with a phone number, not an IP address.
Because of having that phone number, we were able to connect it with a physical location.
So that 911 experience improved greatly.
That’s where we can see policy leading to solutions that have a great impact on the community.
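Here is a hedged sketch of the bridge described above: one record per ten-digit number carrying both a routable endpoint and a registered physical address that can be handed to a 911 center. The field names are illustrative assumptions, not the actual database schema.

```python
# Illustrative sketch (assumed fields, not the real schema) of the
# number database described above: one record per ten-digit number,
# bridging the phone numbering plan and the Internet endpoint, plus
# a registered address so 911 can locate and call back the user.

from dataclasses import dataclass

@dataclass
class NumberRecord:
    phone_number: str        # stable NANP-style ten-digit number
    endpoint: str            # where to route the video call today
    registered_address: str  # physical location passed to 911

records = {
    "3015550123": NumberRecord(
        phone_number="3015550123",
        endpoint="vp.example.net",
        registered_address="123 Main St, Silver Spring, MD",
    ),
}

def info_for_911(phone_number: str) -> tuple[str, str]:
    """Return (callback number, dispatchable address) for a 911 call."""
    rec = records[phone_number]
    return rec.phone_number, rec.registered_address

print(info_for_911("3015550123"))
```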
So far, I have walked you through the issues of not being able to hear or speak, the approaches we have had, and how people have made and received phone calls.
I would like to mention one more important part, something the future of relay services relies on, that I will be sharing with you.
The reason I want to flesh this out for you is that relay services need an equivalent experience and an equivalent dialing system.
Without having this discussion, we can’t get there.
In the past, in the time when Alexander Graham Bell was creating his company (I put a picture of the logo on the screen here), this was the company that provided phone service for most of the 20th century, really.
They had a monopoly.
Their customers could call each other, but if you were a user of a different company, you could not make phone calls between companies.
This is where the idea of interoperability starts, that any phone call can be made between any company.
That’s what interoperability is.
Next slide, please, thank you.
Now we’re jumping ahead to 1996.
By then, the Bell Phone Company had been split up for antitrust reasons.
So there were seven different, now, independent companies made from that one company.
And it happened that there was the same problem.
They still couldn’t connect with the smaller phone companies.
So Congress did take action, and the result was the Telecom Act of 1996, which requires interoperability among any and all carriers.
The phone numbers could be passed between each company.
If a person jumps from company to company they would not lose their phone number.
This was a big change.
1996 doesn’t seem like that long ago.
It seems like it was very recent in our history but, I mean, I was a senior in high school.
I had an AOL e-mail address.
Another thing that happened in 1996 was that the Internet really took off.
Next slide.
I can’t emphasize enough how the Internet has completely integrated itself into our everyday lives.
How the Internet has changed telecommunications.
There are not enough words.
I can’t overstate the importance of that.
In the beginning, when we had that plain phone to what we have now, it’s a new frontier of telecommunications.
And the companies and options that are out there for us are endless.
I’m not sure if you remember the old days, but you can see on my screen, on the left, I have a photo here. Does anybody remember this technology?
This is what came with Microsoft Windows back in the old days.
It was a free video chat app.
This was the early 2000s.
It was a very new technology at that time, and it was becoming more and more frequently used.
This opened the door to a kind of new relay service.
And communication information started being transmitted more quickly.
It was wonderful, yeah?
Next slide.
Technology that is available today is amazing and there are so many choices but we’re still catching up.
The ADA was passed in 1990, we got relay, we got the Telecom Act in ’96, and then interoperability for the phone network carriers.
So if you wanted to save money and use a cheaper carrier, you didn’t have to be concerned about not being able to call other companies.
It made it very easy.
And it took that worry off your plate.
But soon we recognized that the old issues were still showing themselves.
And we were still having that same negative issue as we had before.
Just as the Baby Bells had issues trying to call outside companies, those issues were still coming up; with the rise of the Internet, relay companies started competing for customers.
So they would lock down their video systems.
Again, we couldn’t make calls between carriers.
And, again, the policy needed to come into play to address those issues.
So the FCC then instituted an interoperability requirement so that video relay service would be equivalent to what hearing people used, as in we could use any carrier we wanted for our VRS services.
Even today with VRS, it lives in a bubble.
We have devices that work with each other, but they don’t work with the larger world of devices out there, and we need a solution for this.
Luckily, we believe that we have already found this solution.
Next slide.
So the issue that we have, not being able to make and receive calls with our videophone number, is very important.
The FCC asked its advisory council, the NANC, to make recommendations on several things, and they studied the problem of not being able to make and receive calls with videophones using a phone number.
The North American Numbering Council working group did research on how to make interoperability for videophones possible using a 10-digit phone number.
And it was my honor to co-chair the work of that group for two terms.
Matt Hurst from CTI was a co-chair for the first term and then Chris W. from Comcast was a co-chair for my second term.
Next.
So that working group had discussions and thought about how to structure the phone system so that it could support more than just audio calls, adding other features like video and videophones, and we had to contribute and work on that.
Intelligent people from technology, phone companies, Internet industry, all came together to have this discussion and brainstorming ideas to figure out how to solve this problem.
And the approach was to use the same directory concept: a database pairing a phone number with a location for the videophone, the same concept we had before with IPs and locations.
The issues we have now, service delays, dialing equivalence, interoperability, all of those will be resolved by having this database.
That database will support audio calls, videophones, texting, and real-time text. I will touch on real-time text more later, but we already recognize it as the feature that is replacing TTY.
And it’s important that the database makes it possible for VRS callers to directly call 911.
That is not currently possible.
Next slide.
The IVC working group reached consensus on a governance standard for how to move forward.
The first thing we bring up, as many of you are wondering, I suspect, is: who is paying for this?
And how are we paying for this?
Interoperable video calling, IVC; I mean, we’ve had discussions in the past. That database makes it possible for a 10-digit number to connect to a videophone, and we’re going to be using that model for IVC. It could also, in principle, let VRS providers control the calls that are coming in.
We don’t really want that.
Hearing people make direct connections without that.
And so our carriers will have to give up that control as well.
With regard to who’s paying: traditionally it was the FCC, which had a fund with a separate structure for paying for VRS services.
IVC will also need a different system.
We cannot borrow from that system that we have currently.
People want a governance methodology. The recommendation is a GA, a governing authority.
The idea of the GA is that any company that wants to provide IVC for interoperable video calling can join the GA, contribute to the cost of operating the whole GA, and, as a member, voluntarily agree to provide interoperable video.
So anyone who has a product from one of these companies providing IVC can feel confident that they can make calls to any other platform or product using IVC. That is the equivalent interoperability concept.
Next.
The second thing that the IVC working group recommended was a distributed database.
This means that all information is shared with the different members.
We do that because each member who joins the GA will set up their own copy of this database, which quite literally means contributing to the operation of the database.
The database, then, will include lots of tidbits of information about the phone number, like preferred apps for video calls and what kind of relay services you choose; all of that information will be used to route your call to the appropriate place behind the scenes, without you having to make all those separate connections yourself.
That will also solve one of the largest recognized challenges with the relay experience, which is that a hearing person calls in, figures out they have connected to relay, and hangs up.
So receiving calls as a person who uses relay has been challenging.
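As a rough illustration of the routing idea just described, here is a small sketch of a per-number profile and a routing decision; the structure and field names are assumptions made for illustration, not the actual IVC schema.

```python
# Assumed, simplified structure of per-number routing data: each GA
# member would replicate records like these, and an incoming call is
# steered by the callee's recorded preferences rather than the caller
# juggling separate endpoints.

profiles = {
    "3015550123": {
        "preferred_video_app": "ExampleVP",  # hypothetical app name
        "relay_provider": "ExampleVRS",      # hypothetical provider
        "supports": ["video", "rtt", "audio"],
    },
}

def route_incoming(callee_number: str, caller_media: str) -> str:
    """Decide where to deliver an incoming call for this callee."""
    profile = profiles[callee_number]
    if caller_media in profile["supports"]:
        # Direct connection in the caller's medium; no relay leg needed.
        return f"deliver {caller_media} call via {profile['preferred_video_app']}"
    # Otherwise bridge through the callee's chosen relay provider.
    return f"bridge through {profile['relay_provider']}"

print(route_incoming("3015550123", "video"))  # direct video call
print(route_incoming("3015550123", "voice"))  # voice caller: add relay leg
```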
One of the things the FCC asked us to do for homework was this, and they were very specific: they asked us to look into how to reasonably use the database for 911 calls.
And what we found is that when you make a call to 911, the experience is much improved with IVC.
And the database will have all the information about your relay services, and that will be used.
Again, it knows which app you prefer by default, so when a connection is made with 911, you are working directly with 911.
You don’t have to go through relay first.
With IVC, the relay operator is pulled in after the connection to 911 is made.
Another part of the dialing experience equivalence: again, I have had my laptop, my videophone, and my mobile phone with separate numbers.
I can now have one number.
I have my phone number.
I can use it on the IVC database on my app so I can text with a person, I can make video phone calls, I can call into a hearing person as well.
That is an equivalent experience to what a hearing person goes through.
Having the one phone number.
Technology really is changing very quickly, and we have lots of different devices that we couldn’t even imagine a couple of years ago.
And we have them now.
HD video, screen sharing, relay on mobile, live interactive devices: our recommendation for IVC recognizes the work of the people at NTDRD on their projects and the people working on separate projects; both are building the groundwork for governance and communication.
We’ve come a long way from the services we ran on the old-fashioned copper lines.
And one more thing I wanted to mention before I close out here.
RTT really is text over IP.
That’s real-time text.
And it sets the standard for the international industry. Really, it’s a modern version of TTY communication, honestly.
I won’t say it’s the same as TTY, though, because it works on the Internet.
TTY did not.
Sorry.
I was getting those things mixed up in my head here.
So, yeah, modern real-time texting runs on the Internet.
And it is like a TTY no longer tied to the old phone lines.
You can have the audio going at the same time, video going at the same time, data running at the same time.
The FCC really laid the groundwork for the future of communications based on RTT maybe five years ago, when it allowed RTT on wireless connections.
Previously, TTY wasn’t supported over wi-fi.
Having RTT instead really opened it up for all.
There’s no way to put TTY on wi-fi.
So having that now, and the new requirement to support real-time text, has made a lot of things possible for relay services into the future.
Different modalities now can intermesh and intermingle in one call.
We can have audio, we can have text, as an example, in one call.
The old relay modes like VCO and the IP-based services can now work together to support one technology moving into the future.
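As a toy illustration of the real-time text idea, here is a sketch contrasting RTT-style character-at-a-time transmission with message-at-a-time texting; the transmit callback stands in for a network send and is not a real RTT protocol implementation.

```python
# Toy contrast between RTT (each character sent as it is typed, so the
# other party watches the text form) and message-style texting (nothing
# is sent until the whole message is complete). `transmit` stands in
# for an actual network send; this is not a real RTT protocol stack.

from typing import Callable

def send_rtt(message: str, transmit: Callable[[str], None]) -> None:
    """Transmit character by character, TTY-style but over IP."""
    for ch in message:
        transmit(ch)  # one character at a time, as typed

def send_message(message: str, transmit: Callable[[str], None]) -> None:
    """Transmit only once the sender finishes the message."""
    transmit(message)

# With RTT, a 911 call taker starts reading before the sentence is done.
send_rtt("Help, kitchen fire", lambda ch: print(ch, end="", flush=True))
print()
```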
RTT is very exciting.
It gives us a glimpse of what the future of relay might look like, which you’ll be discussing in your breakout groups right after this session.
NG911 also supports RTT, which means RTT could be used for direct calls to 911, with talking, texting, and video with 911 simultaneously.
Isn’t technology wonderful?
But technology isn’t everything.
For real functional equivalence, for mental health, for the function of the communication system, it needs to be inclusive.
This means everyone who uses the phone would be on equal footing.
Technology does not create that.
It is not enough.
After all, the first videophone was in 1936 in Berlin. You can see it in the center-left image; that’s what a videophone looked like 85 years ago.
The picture on the right is a videophone that was developed by Bell Labs in the 1950s.
Those two pictures, one 85 years old, one 65 years old, really were a glimpse into our future.
We just haven’t had the policies to make it happen.
Thank you so much.
If you have a desire to learn more, please feel free to reach out to me, David.bahar@maryland.gov.
The breakout sessions are VRS, IPCTS, and DeafBlind.
I encourage you to join one of them.
If you want to join VRS, just stay right here; for IPCTS or DeafBlind, hop over to your other link.
Thank you so much.
NGRelay: IPCTS (breakout)
Nathan Gomme, Eliot Greenwald, Lise Hamlin, Dixie Ziegler, and Tina Childress
Transcript
>> TINA CHILDRESS: Hello and welcome to this breakout session next-generation relay caption telephone.
My name is Tina Childress and I will be your moderator today. I am a brown-skinned woman with black and white shoulder-length hair wearing colorful black frame glasses and a white cardigan with will TDI bi-annual conference logo in the corner. It says TDI 24th bi-annual conference reset and reconnect in blue text and hashtag TDI CONF in white lettering. I feel so privileged to be one of the newer members at large at TDI and I have learned so much being a part of this organization. I am an audiologist by trade and also a late-deafened adult who uses bilateral cochlear implants. Using technology for accessibility is my jam and I enjoy learning, teaching, and creating resources about it. I’m choosing to speak for this panel since I’m going to be navigating different windows on my computer, but I’m also fluent in ASL. I am so honored to welcome our esteemed panelists for today. Nathan Gomme, Eliot Greenwald, Lisa Hamlin and Dixie Ziegler. I will ask them to introduce themselves and give a visual description. Throughout this panel, I will call on them when they raise their hands to answer questions, and then they’re going to pause before speaking or signing so that people are aware of where to look.
So without further ado, let’s go ahead and start with Nathan Gomme.
>> NATHAN GOMME: Hello, everyone. I am Nathan Gomme. I am also the vice-chairperson of RAFTA, the national group of state relay administrators. And just to let you know about my appearance, I am a blonde, tall, Caucasian male, and my background is a gray stone wall. I’m happy to be here as a panelist discussing this topic.
>> TINA CHILDRESS: Thank you, Nathan. Okay.
Next, we have Eliot Greenwald.
>> ELIOT GREENWALD: Yes. Good afternoon. I am Eliot Greenwald. I am the Deputy Chief of the Disability Rights Office at the Federal Communications Commission and let’s see. I have brown hair but nothing on top. And —
[Laughter]
And I’m wearing a, ah, I guess it’s a gray, dark gray suit with a blue tie and a white shirt.
>> TINA CHILDRESS: Okay. So I created this mug and I’m showing it to you right now. So thank you, Eliot. Next we have Lise Hamlin.
>> LISE HAMLIN: Hi. I’m Lise Hamlin. I’m the director of Public Policy at the Hearing Loss Association of America. I am a white female with graying, brownish hair and blue glasses, and I am wearing a beige suit, actually a jacket, and in the background is my home because I’m still working some of the time from home. Thank you.
>> TINA CHILDRESS: Thank you, Lise. All right. Next, last but not least, we have Dixie Ziegler. So go ahead, Dixie.
>> DIXIE ZIEGLER: Good afternoon. I’m Dixie Ziegler. I am vice president for Hamilton Relay. I get to play in lots of different areas in relay, and I’m really excited to be here to talk about IPCTS this afternoon with these wonderful panelists. I am, I guess I’ll have to admit, a middle-aged female wearing a blue jacket and silver-gold glasses, sitting in front of a desk with a picture above my shoulder of an old Bell office. So it seems appropriate as we talk through some telecom relay-related issues this afternoon.
>> TINA CHILDRESS: Thank you so much, Dixie. All right. So the goals of this session include creating a vision for the next generation of relay services as well as understanding current technology and policy limitations and consumer needs. So this topic can go in many directions. So please start thinking about questions that you might have. You can ask them in the Q&A box, but I’ll also be monitoring the chat window. All right? So let’s begin.
So here are some warm-up questions that we came up with. Do you anticipate smartphones replacing desktop caption phones? And what do we need to pay attention to when making the transition? Does anybody want to start with that? I’m going to go ahead and call on Nathan. Go ahead, Nathan.
>> NATHAN GOMME: This is Nathan here. So, yeah, I think we’re well on the way to transitioning to smartphone technologies, and all the different kinds of tools we have will be in that. If you’re using an Android device, for example, you already have Live Transcribe. You have the ability to add captions to your podcasts and to the different things that you use daily. You also have Apple phones, which have various technologies too. Apple doesn’t have the same captioning technology as Android; it’s not on the same playing field, but they still have a lot of integration within their system. For example, an Apple Watch can be aware of surrounding noise and provide alerts and feedback on the device. That’s a great example. So having two different systems, Apple and Android, means you don’t have a similar experience on each device. That’s one drawback we have seen so far. Maybe as technology keeps coming out, they might end up on the same playing field and provide the same accessibility; I’m thinking that’s where we’ll go. Smartphone technology is also not 100% accessible for low-income individuals or those who live in areas without good reception or service. We’re not there yet, but we are getting there. The FCC is building that infrastructure out and providing those tools more and more. So I suspect in a few years, we’ll see more and more opportunities coming along with mobile devices and the different technologies there.
>> TINA CHILDRESS: Okay. Lise?
>> LISE HAMLIN: I think Nathan is absolutely right. I think we’re ready to move in the direction of smartphones, and I can see a day when we have relay on smartphones. But I’m here to tell you I still get requests for TTYs. There are still people out there who depend on older technology. So while we may see a movement toward smartphones and more and more adoption of relay through apps, I expect there will be a transition time for a while. As Nathan said, some areas don’t have access to broadband; right now that’s a problem. But there are also older people who feel more comfortable picking up a phone than learning how to use a smartphone. So I think that transition will take some time.
>> TINA CHILDRESS: Thank you. So next is Dixie. She had a comment.
>> DIXIE ZIEGLER: I was really just going to add a little bit to that. I agree. I do think a key critical point here is to provide IPCTS captions specifically where customers want them and where those who need the service need them. So, whatever the screen is, wherever that might be, whatever setting that might be in, whenever it’s needed. Now, granted, I recognize that there are some boundaries inside the program itself, and appropriately so. But it’s important, right, that as we drive toward a vision, we continue to really find the places where consumers need to be able to access the service and ensure that accessibility is there.
>> TINA CHILDRESS: Yeah. And it’s not just accessibility in terms of things like prepping and all of that. It’s also for our consumers who are blind or have low vision: are the technologies there for things like that? So, any other comments? Yes, Nathan.
>> NATHAN GOMME: This is Nathan here. I want to follow up about the deaf-blind community. Interacting with them, a lot of them struggle with Android devices compared to Apple devices. So that is something that needs to be addressed and solved. I think one key point that Lise mentioned is comfort with different technologies. You know, some people think new technology is great, but they’re not interested. They want to keep the same old thing that they’re used to. Many years ago, they might have started using that phone and just got used to it and want to keep using it. They might consider switching to a new device, but because they’re used to what they have, they’re not interested. They want to keep what they know. So working with the deaf-blind community is something we need to continue to do.
>> TINA CHILDRESS: All right. I will go ahead and move on to the next question. So how is the FCC addressing the increased use of ASR to make sure that it is providing effective communication?
>> ELIOT GREENWALD: Thank you, Tina. This is Eliot speaking. Basically, you know, we started receiving applications for automatic speech recognition technology with IPCTS a few years ago, and we’ve been in the process of reviewing them. But to back up for a minute: there are basically different ways that IPCTS is delivered. Traditionally, four of the five providers who have been around for a while delivered it using revoicing. A communications assistant would listen to what the person on the other end was saying, revoice that into an ASR program trained to their own voice, and make corrections as they saw the ASR program making mistakes. I would see that as quite useful during the early days of IPCTS, when ASR programs were not as good as they are now and particularly needed to have a voice trained to them, with some significant training of a person’s voice.
One provider was providing IPCTS using stenographers, the same way that TV captions are provided. Then the FCC issued a declaratory ruling saying that providers needed specific authorization to provide IPCTS using ASR only, without the assistance of CAs. So for the rest of this discussion, I’m going to shorten this to ASR-only versus CA-assisted IPCTS to distinguish the two.
So as parties filed applications and we were reviewing them, we were, of course, concerned about the quality of the ASR. We required that parties test their ASR-only service to make sure it was comparable to or better than what CA-assisted IPCTS provides. So far we have authorized two new providers to provide ASR-only IPCTS, MachineGenius and Clarity Products, and we authorized two previously existing providers of CA-assisted service, InnoCaption and ClearCaptions. We had them all tested by MITRE, which is a contractor working for the FCC; as a result, MITRE has no vested interest in any outcome. And before we granted any of those applications, we were able to see that with all four of those providers, the captions in the ASR proposal were either comparable to or better than CA-assisted captioning. But we also noticed one thing, which is that under most scenarios ASR-only IPCTS seems to have better accuracy, and it’s always better on caption delay. Under certain scenarios, the ASR-only captions were more challenging: noisy environments, and situations where other people are speaking in the background, seem to be the two scenarios where the testing was more challenging, but overall we saw much better results. Our view was that consumers should be given a choice. Some consumers prefer ASR because of the privacy involved; there’s no CA on the call. We also require, because of our rules, that the captions cannot be saved by the provider beyond the duration of the call. So, therefore, the ASR engine can’t preserve those captions beyond the duration of the call either, and providers were supposed to represent that they have that agreement with their engine provider to make sure. Now, what I think is very interesting is that we have not received a plethora of complaints from ASR users. The complaints are no different from CA-assisted in terms of quantity and types of complaints. What’s even more interesting, and I’ll mention the two providers that offer a mix: InnoCaption allows their users to choose their default, whether they want ASR-only or CA-provided captions. When I last checked with InnoCaption, they asked to keep the exact number confidential, but a substantial percentage of their users have chosen and used ASR only, and in fact their users can even switch during a call. If they decide, they can switch to CA-provided captions if they’re unhappy, or the other way around; they can switch in the middle of a call.
And then what’s very interesting is that ClearCaptions, and this is public information because they filed an ex parte after meeting with us last week, is now at 94% use of ASR and only about 6% use of CA. They are maintaining CAs for the scenarios I mentioned and for other scenarios. They basically have an algorithm that determines whether ASR or CA-assisted captioning is providing the more accurate captions. So I find that very interesting. We have not received any unusual number of complaints about ClearCaptions, and they have told us they have not received any unusual number of complaints regarding ASR. So I think ASR is here. We also have a long-pending application from CaptionCall to provide ASR in addition to CA-assisted service, and we’re hoping to issue an order on that in the near term. And then we recently received two applications. One from Hamilton, which we just put on public notice; the comment period is still open. And just yesterday, I think it was, we received an application from T-Mobile. It hasn’t been put on public notice yet, but we should be putting it out for comment soon.
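To make the switching model concrete, here is a minimal, hypothetical sketch of a captioning session whose source can be flipped mid-call between ASR-only and CA-assisted captioning; the class and method names are illustrative assumptions, not any provider’s actual API.

```python
# Hypothetical sketch of mid-call switching between ASR-only and
# CA-assisted captioning, in the spirit of the user choice described
# above. Names and structure are illustrative, not a real provider API.

from enum import Enum

class CaptionSource(Enum):
    ASR_ONLY = "asr"     # automatic speech recognition, no human in the loop
    CA_ASSISTED = "ca"   # communications assistant revoices and corrects

class CaptionSession:
    def __init__(self, default: CaptionSource):
        self.source = default  # user-chosen default captioning mode

    def switch(self, new_source: CaptionSource) -> None:
        """The user can switch engines in the middle of a live call."""
        self.source = new_source

    def caption(self, audio_chunk: str) -> str:
        # Placeholder: a real system would send audio to an ASR engine
        # or route it through a CA; here we just tag the output.
        return f"[{self.source.value}] {audio_chunk}"

session = CaptionSession(default=CaptionSource.ASR_ONLY)
print(session.caption("Hello, this is your doctor's office."))
session.switch(CaptionSource.CA_ASSISTED)  # e.g. noisy call, heavy accent
print(session.caption("Your appointment is at three."))
```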
>> TINA CHILDRESS: I’m on mute. That’s fascinating about ASR not getting an unusual number of complaints compared to CA. Thank you for explaining that and letting us know about those things. Dixie, I saw that your hand was raised. You’re on.
>> DIXIE ZIEGLER: I really appreciate all that Eliot shared. That was very good, smart, accurate information. I too think that allowing choice matters, and the FCC has made it possible for consumers to pick what’s best for them, the flavor that they like, the provider that they want to use for these services. So all of the things that come from that, competition and ensuring that choice remains for consumers, are, I think, high policy needs that the FCC always is and should continue spending time on, in whatever ways ensure there is good choice for consumers. I haven’t caught up with all the chat, but I did see someone ask some questions. Someone posted an article about bias in ASRs, and yes, there have been quite a few studies about bias in ASRs, which really speaks to why choice is so important: to ensure that consumers have access to IPCTS no matter what the call conditions are, no matter who is on the call, whenever that call needs to take place, and however the service needs to be provided, whether with a CA or otherwise, so that high quality is available to all IPCTS users. I think there’s another question about word error rate somewhere in there, and I do believe that MITRE is using a word error rate metric. The commission has an open item, and maybe Eliot will spend time talking about that, to develop even stronger and more defined quality metrics for IPCTS.
>> TINA CHILDRESS: Thank you so much. Does anybody else have any comments? Okay. Lise?
>> LISE HAMLIN: Um, I think what Dixie was saying about consumer choice is really an important piece here. Consumers really see a need for competition. Each provider has its own flavor, its own style. I will say, as a user of IPCTS, I have seen that no matter who provides the service, there are going to be some differences, and there are going to be some mistakes made. So what I have seen work well, and I wish everybody would pick this up, though again it’s going to be different strokes for different providers, is being able to switch back and forth. I have had conversations where the ASR has been spot on, and there have been conversations where a person has an accent that the ASR does not recognize, and then the ASR is out the window. So being able to switch over to a CA is wonderful. Likewise, there have been cases where CAs aren’t at the top of their game, and being able to switch to the ASR has been really useful. So that’s something I would love to see as a baseline. But if not as a baseline, I would love to see providers offer it, because it gives consumers something; nobody is going to be 100% all the time. It’s just not going to happen. We’re human beings, and even the ASR, which is not a human being, is still going to make mistakes. I think having that kind of choice will really help consumers get the kind of functionally equivalent call that they really need.
>> TINA CHILDRESS: Okay. Nathan?
>> NATHAN GOMME: All right. Just making sure the interpreter can see me. This is Nathan speaking. So ASR is an interesting topic; I think it’s one of the most interesting things we’re talking about. The technology behind it, the machine learning, the natural language learning, all of those things play important roles in how effective ASR is. So you can use five different ASR technologies and have five different experiences. You can use one trained (inaudible) to recognize your speech; you can use one for other things. One system might learn you so well, learn your voice so well, that any jargon you use, the ASR picks up fine. But then when other people use it, the accuracy really suffers. Like Lise mentioned, with accents, how people enunciate things, and how they phrase things, you can see the accuracy change.
Also, it depends on the server being used for the ASR. If it’s a good, stable server and the connection is good and strong, then the ASR will work fine. If there are lapses, disconnects, or signal drops, then, as I’ve heard in stories from different people, the ASR starts producing things that are not in line with what people are saying. Again, this is based on the technology, your experience with the technology, and your technology situation. As we continue to use ASR, it will be critical that it continues to learn how we speak and phrase things. That is something to watch for. And also, you know, there is a variety of different ASRs; they’re not all the same.
>> TINA CHILDRESS: So I have a question for all of you. There have been some comments, but what about people who want to use IP relay? I saw that (inaudible) was making some comments about that. I think what we’re talking about really applies to people who use their voice to communicate, but what if you are using IPCTS and you don’t use your voice to communicate, and it’s difficult for the system to understand you? How do you see technology improving or addressing this issue? Dixie? Thank you.
>> DIXIE ZIEGLER: You bet. So IP relay is a service that is available. I believe there’s only one provider offering that service today, but the commission continues to take comments; maybe Eliot wants to talk about some of the rate issues surrounding IP relay. I think their recent proceeding is certainly motivated by wanting to ensure that IP relay is widely available, and available to those who really need the service.
The only other thing I would add to the mix, for those who aren’t able to use their voice, is what David really started to hint at toward the end of his presentation: real-time text communication and the level of RTT that’s now available, that whole text-based experience for users. I think David was right that we’re on the cusp of that. I know that Hamilton is in the middle of some trials right now, some wireline trials; RTT is already available on the wireless side. We could spend this whole time just talking about RTT, I’m sure. But I do think there are options available today for those who can’t use their voice, and I think there will be even more as we move into the future. I don’t know, Eliot, if you have anything you want to add to that.
>> ELIOT GREENWALD: Yeah. IPCTS and IP relay really have two different purposes, because IPCTS is designed for people who can speak with their voice, while IP relay is for people who need to use text going in both directions. And Dixie is correct. One of the reasons the FCC is about to vote next week on an IP relay notice of proposed rulemaking on compensation is that we need to fix the compensation methodology for IP relay. Mostly because that methodology was developed 13 years ago, in an entirely different market, where IP relay was a much larger service, had a fair amount of competition, and unfortunately had a fair amount of fraud. And that wasn’t the only reason why IP relay is less used now. It was a major factor, but the other factor is that at that time, in 2007, there really wasn’t a lot of VRS mobility, and so that too has decreased the demand for IP relay. But the people who use IP relay need that service and can’t use another form of TRS, because the other forms of TRS just won’t give them effective communication. Even though it’s a small service, among the FCC staff we consider it to be a very important service for those who need it for effective communication.
>> TINA CHILDRESS: So, more interesting comments in the chat. You know, I look forward to a day when we can choose whatever works for us, right? When we think about universal design, so that for communication with hearing people, it doesn’t matter if you want to use IPCTS, VRS, or IP relay. Do you see that coming down the road? Is there a new technology out there, or something that can make this all kind of seamless? Thoughts about that? That’s my wish list. And this will be the last comment because we have to shut down.
>> NATHAN GOMME: Okay. This is Nathan speaking. That’s something I would love to see happen. There are so many potentially useful technologies out there, and it’s hard to know which one is best right now. Several groups are trying things out, seeing if this works better than that, still trying to figure it out. So CSD has the direct link for VRS communication. That’s great: you can connect and speak to someone directly, which is wonderful. But not everyone uses that. For example, I called Delta Airlines and it didn’t work well, so I moved into the chat for Delta, and their AI missed a lot of the comments I was making and misunderstood me. Was I even speaking with an agent? They started talking about the Clear Program for accessibility, and I had to back out: no, no, no, there was a misunderstanding, and I wasn’t speaking with a live person. I jumped on Twitter and tried to actually interact with someone through Twitter. So, you know, you have to trial each of these technologies and find what works best. I wish we had different buttons for that, but we don’t know which one works best. There are different contracts with different companies out there, so we need to educate them and find out which ones work best, and the companies need to buy into that system. We need to educate the community that these options are out there, and then we can work from there. And we can’t rely on a one-size-fits-all mindset. Think about rural areas and Native American communities; they need to be able to use, you know, IPCTS or CTS. So we need to ensure that companies have a way to connect with each person, find their needs, and use (inaudible) on the back end so they can connect with us. I think that would be the next place we’re going.
>> TINA CHILDRESS: So I am so sorry to cut this short. I think we could have gone on for another hour. So thank you so, so much. So feel free to connect with us on our LinkedIn profiles and ask more questions. Thank you to the attendees that have come today and we will see you in the next plenary section. So thank you very much. Have a good afternoon, everybody. Thank you very much, panelists, for your wonderful comments. I really appreciate it and thank you, communication facilitators. All right. Goodbye.
>> ELIOT GREENWALD: Bye.
>> TINA CHILDRESS: Thank you.
NGRelay: VRS (breakout)
Zainab Alkebsi, Spencer Montan, Lance Pickett, Michael Scott, and Jim House
Transcript
>> JIM HOUSE: Okay. Michael Scott, attorney advisor at the Disability Rights Office at the FCC. I hope you can come on screen really briefly.
Okay. Hi.
>> MICHAEL SCOTT: Hi. Good morning to you, Jim. Or late morning, anyways.
Sorry, it took a moment to get the voice interpreters so that I could hear what was going on.
>> JIM HOUSE: My name is Jim House. I am the TDI board member representing the West region. And I am the moderator of this panel discussion.
I hope you will have several questions and maybe if time allows at the end, we will have some questions from the audience.
Using the Q&A, just type in the questions there, and we will monitor that. Okay?
So here today we’ll be discussing video relay service. We’ll talk about past issues and barriers and set the stage for the perfect system today. What is your vision of the perfect VRS? We’ll start with Zainab.
>> ZAINAB ALKEBSI: Hi, this is Zainab. Would you like me to continue or should I wait for the interpreter?
I’m concerned about full accessibility.
>> JIM HOUSE: Okay. We’ll just go ahead, because we have the voice interpreter and we have the captioning, so we can go ahead and sign, and hopefully the interpreter will come in in a few minutes.
>> ZAINAB ALKEBSI: Sure.
Happy to kick this off. My name is Zainab. That is a great question.
In thinking about where we are now with the video relay service system, and where we need to get to, and what’s missing.
For me, the perfect VRS system is a system where we have the same communication equity as other users of telephone systems. That means the deaf user experience must be the equivalent of the non-deaf user experience. Not close enough or almost there. No. It must be equivalent, which means we need a world with robust competition and innovation, and we’re not there yet.
To be honest, right now, VRS is hit or miss. And honestly, like the rest of my VRS calls, it’s a gamble every time I use it, and we still don’t have skills-based routing; we’ve been waiting a long time for that.
VRS service is stagnant. We have little to no technological innovation because it’s been stagnated by rate cuts. The focus has been on fraud, waste, and abuse, FWA, in the process. Fraud, waste, and abuse are in the past. It’s time to move on. It’s time to prioritize equity. It’s a must.
Then we also have to consider the needs of DeafBlind people and deaf people with additional disabilities, and right now VRS is not accessible for many DeafBlind and deaf-plus individuals.
So a perfect VRS means VRS that is truly accessible for all of us in the community, not just some of us.
And then also, a perfect VRS is one that recognizes intersectionality and factors that into the equation. We have to expand the pool of diverse interpreters. Most of the interpreters are white, and they don’t necessarily fit with my background or my culture. They don’t really fit with my voice, you know? So we really need to expand the pool of VRS interpreters.
And then also part of the vision of a perfect system is where we have no delay in signing up to receive the service. Right now it’s a very cumbersome experience, and in order to receive VRS service, we have to go through somebody that oversees it, and sometimes we have to wait a few days or weeks. Is that equivalent? No, it’s not.
A perfect VRS means that we can use captioning to verify the interpreter’s voice and accuracy, rather than just assuming everything is going right, or trying to lip-read what the interpreter said, and sometimes it could totally be missing the mark, so we don’t really have that.
There are critical gaps in VRS service, and to multiply that by COVID, at a time where people need to connect, they need that connection more than they ever needed it before.
So just to wrap this up, someone said something last week that really struck a chord with me. That person said COVID accelerated the need for teleconferencing and advanced video technology by five years.
Five years of progress compressed into one year, wow, wow! Where is that level of innovation for VRS? Where is it?
So that’s what we really need to figure out: how do we move on from the past, focus on the future, and get there? That’s why I think this is a great discussion, but we need not just to discuss, but to take action.
So those are my thoughts about what a perfect system is.
>> JIM HOUSE: Okay. Thank you for your points, Zainab.
We’ll be addressing some of these later in the discussion, but right now, Spencer Montan, do you want to make any comments? Anything you want to share?
>> SPENCER MONTAN: Sure, thank you. Zainab, you made a lot of great points about the issues.
Myself, as a deaf person, I tend to have a few phone numbers. I have a text number, one for personal, one for business, one for work. So I have several phone numbers, and it’s really tough for me to manage them all, try to remember them, and give out the right one. So I’m trying to figure out: how do we reduce the number of phone numbers to make them easier to recall and remember? That would make it easier to connect my mailing address with the people in my contacts, as well as my video contacts. Everything is separate right now. How can I manage all of these contacts? I miss one call, then I miss another call. And I think the perfect vision of VRS probably also means improving geolocation for 911.
The reason is that I would log in and put in my home address, and okay, but if something happens and I’m in an emergency at the food store, or if I’m in a car accident, how would emergency personnel identify and recognize my location? That’s why I think geolocation is very important to address and solve right now.
And then, you know, another — yeah, so that’s what I think would make VRS a perfect system.
>> JIM HOUSE: Thank you, Spencer. And now we’ll move on to Lance.
>> LANCE PICKETT: Hi there. This is Lance, signing right now. My sign name is Lance.
So I think Zainab really nailed it. She answered a lot of these questions perfectly about what a perfect VRS would look like, and I agree with everything she said. Everything Spencer said too. So I just want to add my imperfect answer about a perfect VRS and what that would look like.
Look at the world: hearing people use their phones, their devices, to communicate. It seems very integrated into their lives. Very ingrained. It’s very smooth. They don’t need to think about quality. They don’t need to think about how to make a call, and they don’t have to think about how to contact people. They just pick up their phone, boom. I’m jealous. Deaf people, we have to add on everything. We have to worry about video quality. We have to worry about all these sorts of things that hearing people don’t even think about.
So Zainab is right. I want the exact same equitable experience, and I want more. I want it to be more seamless, more integrated. I don’t want to think; I just want to make a call, connect with people, and build relationships, without having to explain what VRS is or explain all these things. It should just become ingrained, no different than Spanish and the other languages that Americans already embrace in their systems. They already think about that.
So that’s one note, the integration of VRS.
And then the second one is that we can’t forget about interpreters. The interpreters are on the other side of the call, and we need to think about developing technology for them too. As companies, as policymakers, as big tech corporations, we need to think about the interpreters’ needs as well, because I noticed during COVID that interpreters in the VRS industry became tired of the whole virtual setup, using technology at home. They used to work outside their homes; now they’re working from home, and it’s still not perfect for them. So we really need to advance the technology for both the deaf consumer and the interpreters, and that’s my hope.
>> JIM HOUSE: This is Jim. I hope Michael Scott has an interpreter with him now?
Okay, great. All right. Thank you.
So I missed the introduction earlier. If you don’t mind, please briefly introduce yourself again, and then respond to the first question about what a perfect VRS is. Is that all right?
>> MICHAEL SCOTT: Of course. That sounds fine.
Yes. Great.
So my name is Michael Scott. I’m an attorney advisor with the Disability Rights Office at the Federal Communications Commission. I’ve been there almost five years now. It’ll be five years next August.
So the perfect VRS system. I know it’s been hard to answer that question, because everyone has their differences, and there are a lot of different people that the system needs to serve. One of the questions the FCC looks at a lot is: should we be adding more forms of VRS to account for certain aspects or variances, to accommodate people such as individuals who are DeafBlind or deaf plus? Should we be looking at a completely different type of system, or should that be better integrated into VRS, and what would that look like?
And as a regulatory agency, we end up having to look at who’s doing any kind of innovation on the outside and what we’re hearing from consumers about what’s happening. We have these forums for input, and that’s really the big question that we always look to.
So I don’t know if I can answer your question as to what it would look like, but I can say that we do want to help you get there, and we do spend a lot of time trying to get inputs on those questions from consumers, from technologists, from industry: what it would look like, what needs to change. And then, I suppose, as an agency, we just need to be held better to account as to what steps we could keep taking to push things forward, if there’s something more that we could be doing.
Thanks.
>> JIM HOUSE: Thank you.
And next, the next question: what technical barriers do we see? Is there any technology out there that we can use? Some of you mentioned geolocation, which can help 911 find out where you are. Then there’s GPS, which can find the apartment building if you’re in a high-rise, and it finds you by address. But how do we improve on that, like (indiscernible) access? What floor of elevation are you on? That’s another step that we need to add. But are there other technologies out there that we can take advantage of and build into VRS?
Who wants to field that one first? Maybe Spencer? Do you want to start?
>> SPENCER MONTAN: Sure.
So we have to think about VRS as a relay service. Right now we’re seeing communication services out there; a term that you might be familiar with is over the top, OTT. These are apps out there, you know: Skype, WhatsApp. All of those are communication service technologies.
Is there a way that we can integrate this into the VRS platform? Would that benefit us as deaf and hard of hearing consumers, so we don’t miss calls and can communicate with our loved ones and friends using those platforms?
>> JIM HOUSE: This is Jim.
Interesting. Interesting.
Lance, you wanted to say something?
>> LANCE PICKETT: Hi. Yes. Lance here. Lance signing.
I want to think about the technical barriers and what technology is already out there. So I’m thinking of FaceTime. You know, Apple’s FaceTime? How can we integrate with that? Are there other messaging apps, like Marco Polo, Facebook, WhatsApp? There are a lot of them out there, and how can we incorporate those into video calls?
Hearing people can send a video message, but if a hearing person sends one to me through any of those apps, I’m not going to understand it. I have to add something on, or have another person tell me, hey, what did this person say?
So that’s where the gaps are. I think we are at the mercy of big tech companies. We have to wait for the big tech companies to develop something.
VRS, TRS, policy, government regulations: they don’t really allow a lot of space for experimentation; there’s no real place for innovation. We have to follow specific requirements, whatever the FCC or the policy says, and abide by that. So that’s what we do.
But it doesn’t give us a lot of room for, well, what if? Why can’t we do this? What about this? Can we improve this? So that’s one.
Yeah, that’s my answer.
>> JIM HOUSE: This is Jim. Thank you.
Zainab?
>> ZAINAB ALKEBSI: Hi, Zainab speaking.
So one piece is missing from this. It would be great if we had integration with FaceTime and other tools and messaging apps, but the problem is that innovation is happening in that space without deaf people at the table. All this progress happening with VRS and all this progress happening with other video applications is operating in different silos. If we could bring everybody together, if big tech companies brought deaf people to the table and had this type of discussion, then I am sure we could make a lot of advancements.
You know, NAD does a lot of great work with a lot of technology companies, and I applaud their work in this space, but we still have to recognize that there is a long way to go. We should think about how we can integrate and build all this in, rather than waiting for things to be deployed and then saying, hey, what about me? Don’t forget about us. We always have to remind them, remind them, or go back and address something after the fact. A perfect world is one where deaf people are not just at the table; instead of saying, okay, well, that’s a technological limitation, fine, we can’t do everything, we’re stuck, we’ll just keep the same status quo: no, no, no. Think outside the box. Come on, let’s try to address these things, and if everybody could share and pool resources, then no question, we could definitely accelerate a lot more.
>> JIM HOUSE: Thank you.
Michael, do you have anything you want to add?
>> MICHAEL SCOTT: I will say, to build off of Lance and Zainab’s points a little bit, part of the larger issue is that we don’t have full video integration anywhere. You still can’t really communicate from, say, Google to FaceTime. The technology doesn’t cross yet. We all believe it’s possible, but we haven’t figured out that next step. The FCC, for its part, had its North American Numbering Council create a subgroup to talk about this question, and it brought a great, diverse group together to say we need interoperable video calling. How do we make sure that 911 has that video capability? The panel couldn’t answer everything, but they got started and created a good document. The FCC needs to look at it and figure out the next steps, and the industry needs to look at it and figure out its next steps, whether that’s big tech, the telecom companies, or the VRS companies.
I guess we keep waiting for that next step. I think we all need to get together and say, let’s move forward, take another step forward. I think those FCC panels that were created were a good step. I’d like to see more.
>> JIM HOUSE: Yes, I agree.
People within the deaf community should be at the table, not only the FCC and the big corporations that develop video platforms.
Okay. So now, for our guests: what barriers can the FCC and industry help with, and how can we engage researchers and developers, R&D, research and development? Do we need more outreach to consumers, to other companies? Any thoughts on this?
Let’s see. Michael? Michael, do you want to start this off?
>> MICHAEL SCOTT: Yeah. Yes, thank you.
So, procedurally, there are different kinds of places and gaps you’re looking at. What needs to be the next step taken by the FCC? Are there other constituencies we’re missing? What do the VRS providers need to do? What do they need for their next technological step forward, where they can go? The FCC has spent a lot of time focused on correcting past issues of fraud, waste, and abuse, and it’s difficult to say that’s all in the past, because we have to be cognizant of it. We have to be watching for it. We have to make sure it doesn’t happen again.
So we take our steps forward. We try to loosen up the rules where we can. One change that took a long time to make, but was made, was at-home call handling. We started a pilot program and got it off the ground, and that actually had very opportune timing, because when COVID happened we really needed interpreters working in their homes, and all the providers had a mechanism for building on what they had started in the trial program. That helped, and now the FCC has pending petitions to look at our next steps there, like how to change the rules we put in place on that, if we need to, and those are all questions that we’re going to look to answer.
And I’m trying to think of my next phrase here, the next thought I want to have.
There’s never been a direct roadmap for us, because we get a lot of different questions and a lot of different directions and issues to address. I think what’s helpful is when we hear from consumers, from the individuals who can express the clear bottom-line issues, that we just need better functionality; then we can do things to try to address those, to make sure our minimum standards are up to par.
You know, one of the other issues that our Disability Advisory Committee looked at is metrics, to understand whether the service is functionally equivalent. We’re taking steps to improve interoperability, to improve services, and to make everything operable, so that VRS has interoperability.
We have a little bit of funding that we have made available for that in research and development. We have rules that sometimes need to change; sometimes they just are where they are, because of competing inputs. But we always welcome additional feedback about what we should step in and help with, what we’re missing, what gap or connection we haven’t made. And I know sometimes everyone would like us to move a little faster, and we try to move as fast as we can, but it depends on the questions before us, what we can answer, and what we can act on.
Thanks.
>> JIM HOUSE: This is Jim here. Does anybody want to add?
Lance.
>> LANCE PICKETT: Hi. This is Lance. I really liked what Michael Scott said. I also want to commend the FCC. When COVID popped up, (indiscernible) how do we shift to working from home, and the FCC was very proactive and their response was very fast. We were worried at first, because typically things take time; changes in policy typically take time. But during COVID, the FCC really stood up, helped, and supported us, and they waived some of these rules to allow us to be very successful as a VRS provider in this industry and to provide for the interpreting needs of the deaf and hard of hearing community.
I also look forward to the day when we can move beyond the fraud, waste, and abuse mentality that has been lingering for the last several years. A lot of discussions, considerations, and rules were developed to address fraud, waste, and abuse, and I would really like to move on. I agree it’s very important to prevent FWA, but at the same time we need to encourage innovation and progress on our journey.
>> JIM HOUSE: This is Jim, thank you. Oh, Zainab, do you have something?
>> ZAINAB ALKEBSI: Yes, this is Zainab.
I just wanted to say that I recognize the limitations that the FCC has, that bureaucracy can be a bit slow, and that there’s a need to develop a full record on an issue. I recognize all of that, but I do want to say that I encourage the FCC, like I said earlier about tech companies, to also think outside the box and to be proactive rather than reactive.
So that’s just one thing that I wanted to add to that conversation.
And like I said earlier, FWA is a thing of the past. Let’s focus on innovation and the future.
And speaking of the future, I did want to add a key point to the previous discussion about technology and all this integration. All this interaction with FaceTime and personal conversation, yes, but I want to offer a caution about using VRS within a virtual program like Zoom or Microsoft Teams and others. That is important as a backup option: there’s no interpreter available, okay, fine, use VRS. But I want to caution everybody to be careful, because I don’t want entities who already have an existing legal obligation to take advantage of it and say, oh, we can use VRS, we don’t need to pay for an interpreter. No, it’s important to still have interpreters involved, because they know the correct terminology, whereas VRS interpreters handle (indiscernible). Think of those who have formal telehealth appointments and other formal contexts. And if you have a work meeting with five deaf people calling in, and all of them use VRS, whoa, that’s an unnecessary waste of the funds. It’s going to deplete the fund quickly, and we need to keep the fund very healthy.
So these integrations are great, but I want us to be very careful: when is it appropriate, and when is it not appropriate?
One really nice thing that we haven’t discussed yet is integrating VRS with multiplayer games, where players speak with each other through gaming. It would be awesome for deaf players to have and use VRS in that space. That would be a game changer, no pun intended.
So I just want to add that.
Boy, was that a pun.
>> SPENCER MONTAN: I think it’s important to have that collaboration with all of these organizations, to have policies that benefit all deaf and hard of hearing people, and to make sure those policies require inclusive design, so that the product is accessible from the beginning, from day one when you start building it, rather than later.
So the community has a responsibility to help develop this and push them, and I know that corporations have their own timelines. I know they have their own goals, but it’s really important to get together and work with the community.
And as a university, we do case studies. You know, we collect feedback through surveys, focus groups, and dialogues with the deaf and hard of hearing community. We can collect all of this and then share it with the FCC and other organizations. So it’s very important to have this type of collaboration.
>> JIM HOUSE: This is Jim. Thank you.
Yes, I know that during COVID, everybody had to stay at home, and a lot of the rules related to relay services had to change. Those changes will dissipate when COVID is over, but which rules do we want to keep? For some rules, do we want to go back to the pre-COVID rules?
So which rules would you like to see stay, and which temporary waivers would you like to keep?
Lance?
>> LANCE PICKETT: Hi, this is Lance.
Which rules to keep? Really, I’ve been thinking about the interpreters. COVID caused a lot of interpreters to work from home, and that turned out to be a good decision; it gave interpreters more options. They don’t have to drive to work. They don’t have to go sit in a center. They can just stay home and support calls. So we need to make sure that those types of rules are still in place: that interpreters are able to work from home, that their environments are safe and private, and that we don’t allow third parties or eavesdropping.
And remember, before, the FCC had a rule limiting the number of interpreters that could work from home. They expanded that number, and I think we should keep that expansion going forward.
I also think, and it’s not rule-related, but the community has shifted its communication methods to Microsoft Teams and Zoom. I think that’s here to stay, and I think we should promote and encourage that type of technology, but I agree with what Zainab said: we cannot allow that to become a replacement for in-person interpreting needs. We’ve noticed there are some organizations out there that look at it and say, oh, well, this is cheaper, I’ll just go ahead and use that and not hire an interpreter. And that’s not the right approach at all. It’s not.
We should have a balance between what’s appropriate for Zoom calls and what’s appropriate for in person. So we need to do some type of outreach and education to remind people to keep in mind the needs of the deaf and hard of hearing community.
That’s another area where I just wanted to support Zainab’s statement: the way the ADA and everything around it is set up is creating a focus on cost, and it shouldn’t be like that.
So doctors, schools, hospitals, and everything: when they think about interpreters, they think about cost, and we have to remove that mentality. We have to be able to provide the services without having to go for the cheapest agency or the cheapest interpreter. Just look at the quality and get the right interpreter. Sorry for going off topic, but I just wanted to mention that as well.
>> JIM HOUSE: This is Jim.
Zainab, you wanted to add?
>> ZAINAB ALKEBSI: Yes, this is Zainab speaking.
Yes, so two things: about COVID specifically, and then I wanted to address Lance’s last point about the unnecessary focus on cost.
They’re related in a way, but I wanted to expand on that some more.
Firstly, relating to COVID, I want to commend the FCC. When you were talking about the need to be proactive: the FCC was quick right off the bat. Once COVID hit and people started staying home and social distancing, the FCC quickly made a lot of waivers to minimize the disruption of the relay service system, so I really want to applaud them for that.
For example, the at-home interpreting. We were initially very concerned about this, even before COVID, with the whole pilot program, because of privacy and the reliability of broadband at home and what the at-home environment would be like. But since COVID, with these interpreters and the security measures in place, it really grew the pool of available interpreters, and we were able to have an accessible video relay system. So I really want to acknowledge and commend them for that.
And now, to Lance’s point about the unnecessary focus on cost: yes, you are absolutely correct. NAD has proposed in several contexts a communication access fund and a reasonable accommodation fund, the two of them: one generally for doctors’ offices, lawyers, and the like, and the other for employment.
And the idea is that as soon as a deaf person comes to that office, that company, that hospital, they don’t have to worry so much about the cost and expense of interpreters and end up denying the deaf person the access they need. If there were a centralized fund for interpreters, that would remove the whole concern about cost and remove the disincentive for providers to provide interpreters.
So I think that would help tremendously with resolving a lot of those issues.
So I just want to add that.
I know it’s a little off point, but there are a lot of issues in that space, and I just wanted to address that.
>> JIM HOUSE: This is Jim.
Yes, there’s a lot of overlap.
Okay. Spencer?
>> SPENCER MONTAN: Yes. This is Spencer.
So I just wanted to quickly explain about the video (indiscernible). We have been working on this as advisors for the past couple of years. The concept is called IRIS, named after the Greek messenger goddess. We want your mobile device to have the same full access as a hearing person’s, a non-deaf person’s, device, and that means you have one number, which integrates with your dialer system.
So by going into accessibility settings, you can turn on, you know, your VRS and your CTS calls, and have the option to communicate with these people, your loved ones. We are actually developing a prototype now at CAT, the Center on Access Technology. We have a lot of different projects we’re working on, but this one in particular, project IRIS, is the most exciting, and we’re hoping it will fit the needs of VRS and the industry out there. Hopefully it would lessen missed calls, and we could integrate with your address book, your contact list.
So we want to empower consumers to decide what type of relay settings they want on their platform.
>> JIM HOUSE: This is Jim. Thank you.
Okay. One more. Zainab, just one more?
>> ZAINAB ALKEBSI: Yes, just one more quick thing I wanted to add to Spencer’s point about one number for everything, just to share from a deaf and hard of hearing consumer’s perspective. That would be absolutely beneficial for all of these deaf and hard of hearing consumers. Often I have to give out two or three different phone numbers.
For example, my doctor’s office. I tend to give them my text number, because they tend to send me appointment reminders and notifications to confirm.
And then I have to give them my VP, my VRS number, in case they want to call me and have more discussion about my appointment, or to discuss anything else on the phone.
I have to give these two numbers, and that’s really annoying. If I could just combine these into one number, that would be a huge benefit and I just wanted to add that.
>> JIM HOUSE: Michael, do you want to add anything to that?
>> MICHAEL SCOTT: So it’s difficult for me to jump in on that. To go back to your original question in terms of what rules the FCC should keep and not keep, you know, we have some pending petitions before us about making some COVID changes permanent, and those are up for comment, and the FCC is looking at those questions.
You know, I think generally we are open to making the technology better. You know, the FCC’s mandate is fairly limited. We oversee the VRS aspect of it, and then you have the whole other aspect: when you have these group video calls, who’s putting on that call? Whose responsibility is it to make sure there are interpreters? That falls outside the FCC’s jurisdiction; it falls to the Department of Justice to make sure that people are providing the appropriate accommodations in the right places. And then you get into various workplace questions, which dovetail into different jurisdictions.
There are certain things the FCC can focus on and promote and be a part of, and there are other things that just fall outside of what Congress has told us we can do.
>> JIM HOUSE: Okay. How much time do we have left? Let me check my clock.
I know we have a few questions from the audience. One is: how do you get an interpreter to stop announcing, you know, that this person is using sign language? How do we just start the call without addressing that?
So, does anybody want to take that one?
>> SPENCER MONTAN: I’ve experienced this before. Once you join the call, just tell the interpreter at the very beginning, you know, and the interpreter will follow what you say.
>> JIM HOUSE: Thank you, thank you, Spence.
Lance?
>> LANCE PICKETT: I have mixed feelings about this, because some deaf people want the interpreters to be introduced, since hearing people can sometimes be confused by the voicing: I know this is a male deaf signer, but it sounds like a female. So my suggestion is, right before the interpreter makes the call, let the interpreter know, as Spencer said, I don’t want to be introduced, and the interpreter will always honor that request.
>> JIM HOUSE: This is Jim. And yes, you always have the right to change the interpreter according to your gender preference if they’re available.
>> LANCE PICKETT: Yes. That’s correct.
>> JIM HOUSE: Okay. Someone’s asking about since we have captions — let me read that. Okay.
Do we want to have a pilot program that has a deaf interpreter on the VRS call? What happened to that pilot program? What happened to that concept?
>> ZAINAB ALKEBSI: Hi, this is Zainab signing. I’d be happy to take the lead on that one.
NAD, TDI, and CPADL, which is a (indiscernible) deaf plus organization, have all been advocating for that. We’re all part of a coalition called DHHCAN, the Deaf and Hard of Hearing Consumer Advocacy Network, and I’m the chair of that. We’ve been getting together and advocating for this with the FCC for a long time: to have deaf interpreters in the VRS environment.
I mentioned earlier that deaf plus individuals, for example those with CP, cerebral palsy, do not have communication equity at all in VRS. They don’t have the same experience.
A lot of these interpreters sometimes have a hard time understanding, for example, a person with cerebral palsy. Their mobility restrictions sometimes make it hard for the interpreter to understand, and that’s why we need to have a deaf interpreter there to provide support. We’ve been asking for this for a long time, and it still hasn’t happened yet. It’s been several years now.
So I mean, yeah, we’re still waiting for it.
>> JIM HOUSE: Someone asked how many interpreters are out there. I don’t know how you’d count that.
>> LANCE PICKETT: I don’t know the exact answer, but there are about 8,000 to 10,000 VRS interpreters. Not all of them work full time; some work only a few hours a week or for supplementary income, and others work in the community. So not all are full-time.
But most interpreters have worked in the VRS industry at some point in their career, so I think most of them have experience with VRS calls, yes.
>> JIM HOUSE: Okay. Thank you very much, everybody.
Okay. We are right on time right now, so thank you very much. See the link to go back to the main room. Thank you, everybody. Thank you for your time. Thank you, bye-bye.
NGRelay: DeafBlind (breakout)
Keith Clark, Gabrielle Joseph, René Pellerin, Bill Wallace, and John Kinstler
Transcript
>> JOHN KINSTLER: Are you all ready, are we ready?
Yes.
Great.
Are the interpreters ready?
Wonderful.
I can’t hear anything, so I’m hoping that the rest of you are able to hear.
Hello, my name is John Kinstler.
This is my sign name, J.K.
My visual description of what I’m wearing and what I look like, I’m a middle-aged man.
I have short gray hair, a beard, and a mustache.
I’m wearing a blue long-sleeved shirt and I’m standing in front of a black, or dark background and I wear glasses.
I will be the moderator for this panel.
I’m very happy to have everybody here today.
David’s presentation really brought me back, just really looking back at how far we’ve come since the TTY.
And as we saw in the discussion, we were trying to figure out what GA meant, which is go ahead.
And for those of you who don’t know, I am old. I’d like to give a brief description of the agenda.
We will be doing introductions with the panelists, and they will elaborate a little bit more on their backgrounds, and then questions will be posed to them.
There are communication rules.
If you could please state your name, both for the deafblind interpreters and for the CART writer, to identify who is talking.
We’ve already selected some questions for our panel, and we wanted them to take the time to really think about how they would answer them before we met today.
If any of you in the audience have questions, I encourage you to please feel free to submit those questions through the Q&A option, which you’ll find at the bottom.
If there are any questions that have not been addressed, I’ll be sure to bring them to the panel and see if we can get those questions answered, so it will be a very relaxed session.
I would like to introduce the panel first.
We will introduce them by their names and their positions, and then we will address each question to the panelists, and they will elaborate on the topic and explain more in depth what their duties are, what they use, which service provider, and so forth.
We’ll go ahead and start with Keith.
Hi there, Keith.
That’s Keith Clark.
And he works in accessibility management for T-Mobile.
Hi there, Keith.
Next, we have Gabrielle Joseph, and she is the chief of operations.
So hello, Gabrielle.
Next, we have René Pellerin who is a DeafBlind advocate.
So hello to you, René.
Moving on, next we have Bill Wallace, who is an attorney-adviser for the FCC. Welcome, Bill.
For the questions, I’d like to focus specifically on deafblind users, and we’ll go ahead and start with Keith.
Keith, do you mind sharing more about your position, your role, and what your experiences have been?
If you wouldn’t mind, just give a brief description of your position.
>> KEITH CLARK: My name is Keith Clark.
This is my sign name.
And I’m here in Washington and I work for the T-Mobile headquarters here.
So I am currently wearing a gray long-sleeved jacket with a T-Mobile logo.
I have short hair.
I wear black glasses.
And my background is dark.
I work in accessibility, and I work for the national deaf and blind program, which focuses on marketing and education, advertising relay and other products with T-Mobile.
I’m a deafblind individual myself.
I do use both the services and work at the headquarters.
So I have two perspectives: being a deafblind consumer and also providing services.
And I’d like to share how I use the relay service.
I’m not sure what you’d prefer.
So do you want me to hand it over to René?
>> JOHN KINSTLER: Share a description.
>> KEITH CLARK: Keith here.
I usually use the relay services with an interpreter; they make phone calls for me, using video relay service and tactile interpreting.
In-person is my preference.
Another option I like to use is IP relay, which is one of the products I work on, and then, of course, I like to use text messaging.
I communicate through both e-mail and text.
So those are several options that I rely on for communicating with family members, friends, and personal contacts. I do use Zoom or FaceTime, sometimes Marco Polo; I go between the three.
I use English.
The best option for me is to have an in-person interpreter.
>> JOHN KINSTLER: John here.
Since the pandemic, how has that impacted you?
What kind of changes impeded your accessibility?
What barriers were set in place and what did that look like at home?
Do you mind elaborating a little bit more on that?
>> KEITH CLARK: So the pandemic, unfortunately, has set many barriers for the deafblind community.
Of course with the social distancing guidelines, we would have to stay six feet apart and many of us were very fearful of just making any contact with one another.
And with the CDC guidelines, because there was a protocol of practicing social distancing, many members of the community were hesitant about making contact when it came to protactile interpreting.
They would explain that they were unable to touch, and I would have to educate them on the best approach; the FCC never implemented any type of rule saying that was not allowed, so we had to be more innovative. And, again, I did mention earlier that I prefer in-person interpreting.
And this also applies to relay as well.
So since the lockdown in March, a lot of businesses and organizations have closed down.
There was no access to information.
There were a lot of people who had to use IP relay and text messaging instead of protactile in-person interpreting, so there were a lot of challenges placed on the deafblind community.
We all had very similar experiences.
And I’m sure others can add to that.
>> JOHN KINSTLER: Thank you so much for sharing that.
So I’d like to go ahead and ask René to share a little bit about his involvement with the deafblind community, how he uses relay, and what sorts of frustrations he or other deafblind community members have experienced.
>> RENE PELLERIN: Yes, hi.
Hi, I’m René.
I’m from Vermont.
And when COVID first hit, it was terrible.
It had such a horrible impact, really.
It was — it was the worst nightmare for anyone who is deafblind because we were so disconnected from the world.
We depend on touch using tactile communication and pro tactile, and that was gone.
Secondly, I’m the chairperson for Vermont Relay Services. They would set up meetings, and then COVID hit us and we had to do all of our meetings virtually. I couldn’t get any information, I couldn’t get a tactile interpreter, and I was thinking, shoot, I have to find some software that could be installed on my computer so that a deafblind person can use relay to make relay calls.
So I would sign, and then there would be text going back and forth. I would reach out to someone and then wait, constantly waiting. I was supposed to be facilitating that meeting, yet there would be minutes where I would be sitting on the phone, waiting, and I still hadn’t joined. I had to tell them to go ahead and start the meeting while I was still waiting, for about a half-hour, and once that time was done I’d have to hang up and shoot them an e-mail trying to figure out what happened at that meeting.
It was just so busy, because so many people were waiting to make those calls; the wait was so long.
It was completely unexpected.
And for other deaf people, it was easy for them to make calls.
They might get frustrated, but they could make a call, and they could make another call and get another interpreter; for us, there was less accessibility, and that was my experience.
I had such frustrations with that.
Another thing is that I’ve been trying to advocate. I’m the AADB president, and I finished my term, but during COVID, especially when it started, we were working and advocating with DHHCAN, the Deaf and Hard of Hearing Consumer Advocacy Network, and all of us were together in a group discussing all the issues.
We needed deafblind people to get free software, especially during the pandemic.
And so we were working on that.
We got free legal services to help us put together all of the information.
We sent it out to the FCC.
And the FCC said they couldn’t do anything.
They turned it down.
They kept saying, no, it’s going to have to go over to Congress.
And the VRS providers were saying, hey, we’re sending all of our interpreters home.
And it was approved, and all of these interpreters were working from home, and there was more accessibility for deaf people, but it became unequal for all the deafblind people out there.
It was so bad.
And now, still, I mean, we’re getting along, everything is going fine, but even still, the software itself costs money, compared to a deaf person who gets VRS software or videophone software for free.
But for a deafblind person, we have to pay.
It costs money.
And that’s something that bothers me.
You can go through a program if you qualify.
Through the equipment distribution programs, if you qualify, you can possibly get it for free.
But there are a lot of people out there who would not qualify for that program, and there’s a big disparity.
And so we have to start figuring out what we’re going to do for the future, so we can make sure it’s equitable for deafblind people as well as deaf people, because right now there’s no equity.
And I can tell you there’s a deafblind man by the name of Brian; he has a really long last name, it starts with an I, and I don’t have it memorized, really bad memory. But anyway, I’m really impressed, because when we first started using a deafblind VRS software called MMXDB, created by a company in Finland or Sweden, one of those places, it was great. They started testing it, it came to America, and then Brian was getting frustrated with the system and started working with them to try to work out all the quirks and improvements.
They did another release and there were tons of improvements.
I could call my credit card company, because they suspected some fraud on my credit card.
And we used the system, and it was great, because I could read the information; we had a one-hour discussion with my credit card company, which worked great.
So that was the upside to all of this.
But still, there are a lot of restrictions all over the place where we can’t get the equipment.
So I could go on more and more about this but I’m going to turn it over and let you guys discuss this.
>> JOHN KINSTLER: So there are other individuals asking, what are their options?
That has to be addressed.
There are a lot of inequities.
I’d like to go ahead and bring in the next panelist. I’m introducing Gabrielle, who is from Vermont, and she’s going to elaborate on her responsibilities and what sorts of products have helped the deafblind community. So I’ll go ahead and turn it over to Gabrielle.
>> GABRIELLE JOSEPH: Hello, my name is Gabrielle Joseph, I’m COO of Global VRS.
I’m actually out here in the sunny state of Florida.
It’s very hot here but doing well.
I am a middle-aged, white female, long, brownish-blonde hair.
Today I’m wearing a black shirt, I have a blue background, and I am hearing and sighted, and I go by she/her pronouns.
A little bit of background about me: I grew up in a family where many of my family members are interpreters, so I do understand sign language and I can sign; if you ever want to meet one-on-one with me, I’m more than happy to, but my signing is just OK, it’s not great, and I’m not a professional interpreter, so I am going to use the interpreters today, and I appreciate their work.
So at Global VRS, we are so proud of the work that we are doing to be involved in the deafblind community.
We know that we are invited guests here.
And while I am not deaf or blind, I have firsthand seen and heard the experiences and the challenges, especially this past year.
And we are the smallest of the four VRS providers, but we cannot stand by and just do nothing.
So we have really jumped in and tried to find creative solutions when it comes to VRS and the deafblind community.
And we have found some very exciting ways that we are making it work.
So if you’d like, I can go into a little bit more detail about that.
Is that the first question?
I believe, John?
Yeah.
>> JOHN KINSTLER: Yes, it is.
>> GABRIELLE JOSEPH: OK, fantastic.
So, in essence, we partnered with a company out of Sweden called MMXDB, which connects to the Global VRS service.
What that means is the consumer, as René said, can express themselves in sign language to communicate, and then the interpreter will actually type back what the other person is saying to them, so it can come through on their braille reader.
Just like any technology, this started first with just a PC app.
And we’re so excited that it’s now being released and innovated to connect to Apple products as well; we know there are a lot of new braille readers on the market with functionality through some of those iOS devices, which are making it even more accessible, so we’re really happy about that.
But in general, that’s how it works.
This is great because, with the technology, you can customize your experience.
The contrast settings of how light or dark something might be.
How fast the braille reader will pick up the speed of text.
You can control that for your reading comfort.
And a host of other different things just to customize your experience.
So that’s great.
How do you get the product?
Some people know and some people don’t.
So how do you get the product?
Well, first you have to register.
In this case, you would register with Global VRS because we are a VRS provider.
We then make sure that you are entered into the URD, the user registration database.
And issued a 10-digit number.
You then have to get a license for the software.
Once you have a license for the software, you can then download it, log in, you can complete test calls with our customer care.
And once they give the thumbs-up that everything is looking good, off you go, and you can make calls to the world.
So the process itself is a pretty simple process in concept.
Many have been successfully doing it but there have been challenges.
>> JOHN KINSTLER: Great.
Thank you so much for that explanation.
So we’ll go ahead and return back to our panelists.
The next panelist I’d like to introduce is Bill Wallace.
Bill Wallace is an FCC adviser.
Could you explain what’s been going on recently with IP relay and with VRS for the deafblind community? Would you mind elaborating a bit on where the FCC stands today?
>> BILL WALLACE: Thanks, John.
Good afternoon.
My name is Bill Wallace.
I’m an attorney-adviser in the disability rights office at the FCC.
I’m a middle-aged white male with gray hair, glasses.
Today I’m wearing a blue shirt and a dark blue coat and I’m sitting in front of a white brick fireplace in my dining room.
And I’m based in Washington, D.C.
So I’d like to point out, with respect to IP relay, that the FCC, at its next open meeting, which is August 5th, a week from this Thursday, is going to consider voting on adoption of a notice of proposed rulemaking directed at modifying the compensation methodology for providers of IP relay, which today is T-Mobile.
But one of the important parts of that NPRM is that the FCC is asking for comments on who uses IP relay, what the benefits of the service are, and what the critical features of the service are for the people who use it, to help the FCC figure out how to sustain this service.
After the item is adopted, assuming the item is adopted next Thursday, the FCC will release a public notice setting the dates for filing comments and reply comments.
Of course, you don’t have to file full-fledged comments; in the commission’s electronic comment filing system, it’s OK to file a brief comment, just a brief note typed into the filing system.
And so we would love to hear from people who use IP relay about what’s important to them about the service, and even how many people use it.
I’d also like to note another matter that was recently released by the FCC.
And that is that the commission again authorized $10 million to be spent in the national deafblind equipment distribution program, which is a program directed at people who have severe vision loss and severe hearing loss and who, pursuant to the statute that Congress passed, are low-income individuals.
And for “low-income,” the FCC set the standard at 400% of the federal poverty guidelines; again, this can be found at the disability rights office website.
So right now that means about $50,000 for a single-person household, and for a household of about four, that’s about $106,000.
So that program is available to provide equipment, such as computers, braille displays, et cetera, that can be used to assist deafblind individuals in making calls to whomever they want.
And I know that program helps thousands of people each year.
And as you may know, the FCC funds it, but the actual distribution of equipment, the assessment of people, the training, and everything else that is required for a deafblind person to use the equipment is conducted through state programs, state equipment distribution programs.
And those are the people that individuals would contact; a list of all the state contact persons is at the FCC’s website, FCC.gov/accessibility, under the national deafblind equipment distribution program page.
And I will turn it back to John.
>> JOHN KINSTLER: Great.
Thank you so much for explaining that.
Now that we’ve done the introductions and know who the panelists are, I’d like to give our panelists, all four of you, an opportunity to add any questions or comments on what was said before, whether that’s for Bill and the FCC, René, or Keith, and if there’s anything you’d like to share about T-Mobile, I’d like to call on you.
I see that Gabrielle has raised her hand.
>> GABRIELLE JOSEPH: Thank you.
This is Gabrielle.
So, in our experience providing VRS to the deafblind community, I know I ended by saying there have been some challenges, and with any new innovation in technology we know there are challenges, but what we’re really trying to find are solutions, right? And that’s why we come to TDI and conferences like this, so we can find solutions.
One thing that I would love to throw out there as part of the solution is increased access to technology.
So as Bill had mentioned, they do have the national deafblind distribution program which is wonderful.
But we have seen some people who have registered but can’t get the license for the deafblind software because of those income caps.
And so if there’s more discussion we can have about not having income caps in order to get access to some of this technology, that’s definitely something we’d like to put out there.
And not just that: there has been a tremendous effort on behalf of the commission with this national deafblind distribution program, which we’re very proud of and proud to support, but on the flip side, maybe there are alternative ways that software can be developed for deafblind users and funded through a different mechanism, whether it’s an exogenous-cost reimbursement mechanism or another tool where these costs can be absorbed and innovation can be made specifically for the deafblind community.
We would love to continue discussions on that.
I think the other frustration we’ve heard from clients trying to use the service is the registration process.
They go through a lot of work to get registered with the national deafblind distribution program, and then they come to VRS: we have to get them into the URD, then we have to make sure we get approvals, and then we have to try to help get the license for the technology, and there are a lot of stopping points for them.
And they’ve expressed that they would really love a way for the national deafblind distribution program and the VRS registration requirement to somehow coincide: if they’ve already registered through one, could they just be fast-passed through the other, so they’re not duplicating work with resources that are already so limited? Here at Global, those are two things that we feel very passionately about, and we will work together with any entity who is willing to work on those two initiatives to make the process easier.
>> JOHN KINSTLER: Thank you so much for explaining that.
I did see René had his hand up and then I’ll turn it over to Bill after.
>> RENE PELLERIN: As far as advocacy goes and what that looks like, now, it’s been ten years, and it’s time to update, revise and reauthorize.
And propose that to Congress.
Now, we’re asking for advocacy for free videophones for the deafblind community, for equal accessibility, just like those in the deaf community have.
But we have to stay on them.
>> INTERPRETER: Sorry.
>> RENE PELLERIN: So we have to talk about how we’re going to get the videophones out there to make sure it’s equitable, just like the TTY going back and forth from one person to the next.
So that’s one thing we could do; we can figure out how to connect.
Something else is that you can connect to a Sorenson VP, and you can connect that way.
And you can add a keyboard, and that works; one person types, and then you can type back and forth.
And so you just add a different service onto what’s already been made.
So that’s something we can use as a starting point.
That way we can keep thinking of more ways to think about this and advocate for this.
Also, the software benefits more people than just deafblind people.
Because if you think about it, hard-of-hearing people can use it too: they can use the TTY aspect plus see a picture there through the video, and they can laugh, make funny faces at each other, show emotions, as well as type the information; it’s great.
So there’s a lot of cool options that could come with this software so we can build on it and expand on what we have already and it will benefit more people.
>> JOHN KINSTLER: That is awesome.
I’m going to go ahead and turn that over to Bill.
Now, we have a question for you.
This is from the audience.
And then I’d like to go ahead and ask the next question, so Bill, go ahead.
>> BILL WALLACE: Hi.
This is Bill.
I’d just like to say that I think Gabrielle’s ideas are interesting.
I think one of the things I’d like to make clear is the disability rights office and the FCC generally would like to hear any ideas about how to improve communications access for people who are deafblind or people who are deaf or vision impaired.
The idea of integrating the deafblind equipment program and the user registration database is certainly an interesting idea.
The deafblind equipment program is creating a new database to track equipment distribution; that’s under way.
So maybe it could be rolled in.
I’d also like to point out, just so it’s clear, that the equipment distribution program has a low-income requirement that was placed in the statute by Congress.
So equipment distribution is at this time limited to low-income individuals.
And as for the TRS services, I know that some providers give away equipment, but under the TRS rules only the service itself is regulated; we don’t regulate equipment.
The equipment distribution program is the only program that distributes equipment.
And the TRS providers who choose to give away equipment do that of their own volition. So that’s the big difference between the equipment that’s distributed by the deafblind equipment distribution program and what other TRS providers give away; it’s not really comparable.
So just wanted to make that clear.
And I’ll turn it back to John.
>> JOHN KINSTLER: Bill, I have a question from the audience.
The question states: are there any opportunities to work with the FCC to explore mandating communication facilitators from VRS providers, so that there is equitable accessibility for individuals who are deafblind?
>> BILL WALLACE: John, could you repeat the question?
>> JOHN KINSTLER: Sure.
Is there an opportunity with the FCC to explore mandating communication facilitators, mandated services from the video relay services, that are equitable for individuals who are deafblind?
>> BILL WALLACE: Well, I — I mean, certainly our goal is to provide equitable services for people who are deaf and blind as well as deaf and vision impaired.
I think what we would need to know at the FCC is: what is the barrier that needs to be removed in order to make it equitable?
And that’s what we’d like to know.
So I don’t know if the person can answer that question.
But the starting point is to know what barrier needs to be removed.
Thanks.
I’ll turn it back to John.
>> JOHN KINSTLER: Gabrielle, do you want to expand a little bit on that, as far as what your experiences have been working with corporations, working hand in hand with the FCC, and what needs to be provided for deafblind individuals?
Do you mind elaborating a little bit on that?
>> GABRIELLE JOSEPH: Yeah, absolutely. First and foremost, I think the FCC does amazing things with such a small team of people, and they have a huge workload on them.
And so there are so many different topics to tackle.
But we have to push the needs of the deafblind community further up the priority list.
So, for example, just taking something as simple as technology, right?
If we are VRS providers, where else do we go today to get technology?
Maybe you get the technology off the shelf, right?
Which is great.
You can get computers and laptops through the national deafblind distribution program.
But, again, if you qualify, right?
And so these products are excellent, but how many of the products are really targeting the needs of the deafblind community?
If you download an app designed for a general hearing individual, that app will not perform the same for a deafblind individual who needs to use the service.
There are additional needs, such as connection to braille readers, connection to JAWS and other screen readers, high contrast, and the speed at which communication flows between those products.
That, today, is not written into the rules.
There are very broad strokes that just say communication should be established.
But communication for one specific community does not fit the communication modes that the deafblind community may need.
And so, you know, I think there’s opportunity in the future to write into the rules interoperability or technology specs that are specific to the deafblind community, specific settings that should be available to all, and not just, again, if you happen to be under a certain income cap.
And while that’s not the FCC, that’s Congress, how do we get Congress to make changes so this is more widely available for all?
And as mentioned in the original presentation, RTT, real-time text, is something that is supported, I believe, by three out of the four VRS providers today.
And if RTT can become much more widely available and standardized, it gives a much different experience.
And with my business hat on, I might be cutting myself out of some work, right?
Because with RTT you might not need VRS and so some VRS providers might be scared of that, others might embrace it.
Here at Global, we embrace it.
We’re about communication, not the almighty dollar, but it still takes a lot of dollars to run a business, so there’s a balance there, right?
That being said, I think RTT standardization is a really big change that could be formalized among all providers, so that you can have that experience during any relay call, and that would also have a significant impact on the experience.
>> JOHN KISTLER: Great.
Now, turning it over to Bill.
Bill, I have a question for you: How do deafblind individuals reach out to the FCC if they want to submit a complaint? Is there an official point of contact, or a link that you can share where a deafblind individual can file a complaint?
If you want to add a little bit on that, Bill?
>> BILL WALLACE: Thanks, John.
This is Bill.
Yes.
If you go to the FCC homepage and click on complaints, there is a specific category for accessibility complaints.
So we obviously read every complaint that we get at the disability rights office and that’s one way to let us know about issues that may be facing the deafblind community with specific pieces of equipment or specific services.
And I’d also encourage people who use IP relay to, you know, file comments when the comment period begins, both as individuals and as advocacy organizations, and let us know what is good about IP relay, what needs to be changed, and what needs to be improved for communication access for deafblind individuals.
Turn it back to John.
>> JOHN KISTLER: Thank you, this is John.
I’d like to check in with Keith or René; I’d like to give you an opportunity to add any thoughts.
OK, Keith, I see your hand up.
And then — go ahead, Keith.
>> KEITH CLARK: So this is Keith speaking.
So in regards to innovation and quality for technology and telecommunication, what I’d like to recognize is the transition within the next few years.
We are moving very fast.
It’s at a very rapid pace.
T-Mobile strongly believes in bold innovation and in leading change.
That is a top priority that we have.
And our goal in getting there is to show that it’s possible to design and build accessibly.
And once everything is accessible, it’s the user’s experience that’s the most important part of this process.
That’s where deafblind individuals will be able to have the same accessibility as those who have everyday accessibility today.
So once a deafblind consumer shares that they’re able to use these services effectively, then that means that we have the opportunity to lead, ask questions, do outreach, reach out to the community members, because, again, I’d like to emphasize that those consumers are very important to us.
And as for what that looks like in the future, that change cannot happen without us, so it’s important to include everyone to take part in this project.
And one of the challenges that we may find with other companies and with the FCC is becoming involved with their committees and their business, to see where the space fits for those deafblind community members.
We need to be a part of that conversation, and we’d also like to have the opportunity to lead.
>> JOHN KISTLER: I’d like to turn it over to René.
René, I saw you had your hand up.
>> RENÉ PELLERIN: This is René here speaking.
First of all, I’d like to share that it’s important to provide free videophones.
Because I am deafblind, I did not get a free videophone because I’m considered a third party.
So Sorenson and other video relay services produce their own products and distribute them.
But when it comes to other companies like Global Ventures, they should provide those products for free, but they don’t.
Anyhow, my hope now is that Congress will eliminate the cap on funding; we need to reach out to Congress to see what the next steps are and what to expect.
>> JOHN KISTLER: Thank you so much for sharing.
I think we are running out of time here.
I am looking here at the time.
Looks like we’ll be wrapping up shortly.
Now, before we close this session, do any of the panelists want to elaborate any further on some thoughts, feelings, anything you feel the audience may need to know as far as the FCC goes and how sighted individuals can really advocate for the deafblind community?
Are there any last remarks before we close the session from the panelists?
I’m going to give it just a moment.
Gabrielle?
>> GABRIELLE JOSEPH: Just in closing, I’d like to jump off of what Keith had mentioned about leadership.
More deafblind individuals have to participate in testing new equipment.
Innovation is happening, change is happening; technology from three years ago is not what we have today, and it’s going to be even better next year.
But there has to be a more concerted effort from all providers, in all areas of technology and innovation, to include deafblind individuals in doing the testing and providing the feedback.
That’s something that we put a concerted effort into.
And part of the reason there was such an improvement on the product we have from a year ago is because of the feedback and participation from individual users taking the time to give feedback.
So we thank all those who participated.
If you haven’t tried it, step out on a limb, try some new technology.
It might be a little scary, it might be a little hard.
But we’re going to be here every step of the way.
We would love the feedback.
Because it truly is your feedback that creates the change.
>> JOHN KISTLER: I’d like to go ahead and turn it over to Bill and then we’ll wrap it up here shortly.
Go ahead, Bill.
>> BILL WALLACE: I’d like to echo what Gabrielle said.
Feedback to the FCC is also important.
So, you know, file comments in docket 03-123; you can read the NPRM on the commission’s events page under the agenda for August 5th.
Anyway, thank you.
Thanks, John.
>> JOHN KISTLER: Great.
So unfortunately I’m just — I’m going to go ahead and close the session because we are out of time.
I do apologize.
But before I do that, I’d like to thank the panelists, the interpreters, the audience, those who participated, and most importantly the sponsors for making this happen.
On behalf of TDI, I hope everyone enjoys the conference; please enjoy the next session, the plenary, and I hope to see you all.
Enjoy yourselves, bye-bye.
ASR Captioning
Larry Goldberg, Verizon Media
Transcript
LARRY GOLDBERG: Hi, folks, Larry Goldberg here, head of accessibility for Verizon Media and long-time friend and fan of TDI. We're going to talk a little bit today about a subject I think so many of us are interested in, concerned about. And that is automatic speech recognition captioning with the subtitle, better than nothing, or is it?
As I said, I'm the head of accessibility for Verizon Media and met many of you when I was head of the caption center at WGBH and the National Center for Accessible Media. So I'll talk a little bit about that background as well. To describe myself, I'm a white male with a white beard, and let's just say over 60 years old.
A little bit about my background: as I said, I come from WGBH, where so much of media access began, originally at the caption center. And then, when we united with Descriptive Video Service there, we formed the Media Access Group. And then in the early '90s, we created the National Center for Accessible Media, where we began to realize that media was rapidly changing into the digital world and that we needed to do some R&D and policy development around media and accessibility. That was from 1985 to 2014.
Just seven short years ago, I left WGBH to join Yahoo, which then became Verizon Media, and which, if you read the news, will again become Yahoo this coming fall. And in the seven years there, I've had a lot of opportunities to do some interesting work, which I'll talk about in a moment.
I come to you today as someone who has been a caption creator, a manager of caption services, both live and video on-demand or pre-produced, I've sold captions at the caption center, and I've bought them while at Yahoo and Verizon Media. Very involved in making policy, legislation standards, and guidelines. But really, most importantly, I'm a caption user. I began losing my hearing maybe about 10 years ago, and I rely heavily on captions now in all environments. So I come to you with all of those perspectives.
A little bit on my background. When I say policy, I was really honored and pleased to be able to work with many in the community on the TV Decoder Circuitry Act back in 1990. 1996, the Telecommunications Act rewrite, which required captions on almost every television broadcast. And more recently, the 21st Century Communications and Video Accessibility Act, which has had such a major effect on all of our lives.
The standards I've worked on include timed text formats for online media, the Web Content Accessibility Guidelines from the World Wide Web Consortium, and how video players respond to user requests and requirements, which I hope you've all investigated in your settings on any kind of video player you use on your computer, on your mobile device, even in your set-top boxes.
In terms of innovation, I've really had a great opportunity to work on everything from local news, real-time captioning, to movie theater captioning through the rear window captioning system you may remember, and certainly the Internet Captioning Forum, which then launched captioning pervasively across online services. More recently, very excited to be working on virtual reality accessibility and captioning. It's a very important part of that.
For a very brief history of captioning, just in case there are some younger folks in the audience: from 1971, when there was really little to no captioning on television, to today, 2021, we went from pretty much zero to 100% in 50 years, starting with linear analog television and going to the web today. When I say 100%, in this case, I'm particularly talking about what we're doing at Verizon Media, which I'll tell you about in a moment.
In terms of video conferencing, part of what we're doing today and what we've been locked into for the past year and a half, two years: we went from zero, pretty much no available captioning on video conferencing platforms, to what I'll call "meh" today. We've got captions, and they may work in certain ways in certain environments, but a big part of that evolution has been happening rapidly over the past few years, and certainly around the issue of automatic speech recognition captioning. And that's what I want to talk to you about today. And I think the panels after I'm done will probably be talking about it as well.
What we're talking about for ASR is comprehensible captioning. Captioning that really works, that really explains content and enables everyone to be on a level playing field. We used to say that using ASR for closed captioning was 10 years away and would always be 10 years away. But, you know, I'm a convert. I don't believe that's still true. But as I'll explain, I don't think we're quite ready for prime time yet either.
Going back to the history I mentioned at WGBH, for that great trivia question that was once on Jeopardy: the first TV program that was open captioned was in 1972, Julia Child's "French Chef," followed very shortly thereafter by the captioned "ABC Evening News." The fastest live captioning could happen in those days was a six-hour turnaround between the time ABC aired the news program and when PBS rebroadcast it with captions. That was the world of open captioning.
And then, in 1980, closed captioning was finally rolled out with the efforts of PBS, and ABC, and the federal government. And eventually, the National Captioning Institute came to be under those new technologies. I have a picture here of Archie and Edith Bunker with the caption that says, those were the days. And they sure were. And a picture of the original telecaption decoder device that many of us had to plug into our TV sets. It didn't even have a remote control back then. There were knobs you had to get up and turn.
I have a picture also of rear window captioning and the icon for that, which was a piece of film with the captions CC symbol. Those were reflective devices installed in certain movie theaters around the country, particularly around Disney World and Disneyland, eventually subsumed by wireless and personal technologies: wearables and seat displays.
But what really changed our world quite a bit was the 21st Century Communications and Video Accessibility Act of 2010. I have a picture of the senator from Massachusetts. At the time the act was passed, he was a Congressman. Ed Markey, really a great hero to me and to many of us who helped advocate for so many of the advances we've had, particularly around captioning.
President Obama signed that law in 2010. Many of you may have been there at the White House back on October 8th when President Obama signed it. And then, subsequently, the Federal Communications Commission put together a committee (I was co-chair of the Video Programming Accessibility Committee) and passed the regulations that affect us even today. And there is an open proceeding right now to think about what kind of changes should come to the CVAA now that so much time has passed and there have been such rapid changes in our digital technology.
Let me talk a little bit about what's happening today at Verizon Media. We are captioning all of our online streams on all of our platforms, certainly desktop and mobile. I have a picture here on screen of Yahoo Finance, which is deeply engaged in captioning their live "Bell to Bell" coverage, that's eight hours a day, and all of their video on-demand that's available on the Yahoo Finance app and website.
Also showing in this picture are the settings that the FCC requires you to be able to adjust on every video player: your style, your size, your color, your font. And it happens that with our video player, you can actually adjust the positioning as well, at the top of the screen or the bottom. And that's true on your mobile device as well.
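For readers curious how settings like these are commonly wired up on the web, here is a minimal sketch in Python, assuming browser-rendered WebVTT captions; the preference keys and defaults are hypothetical, and this is not a description of any particular player's implementation:

# Minimal sketch: map user caption preferences onto CSS for WebVTT cues.
# ::cue is standard CSS for browser-rendered captions; the preference keys
# and defaults here are hypothetical.
def cue_css(prefs):
    """Render caption display preferences as a ::cue style rule."""
    return (
        "video::cue {\n"
        f"  font-family: {prefs.get('font', 'sans-serif')};\n"
        f"  font-size: {prefs.get('size', '100%')};\n"
        f"  color: {prefs.get('color', 'white')};\n"
        f"  background-color: {prefs.get('background', 'rgba(0, 0, 0, 0.8)')};\n"
        "}"
    )

# Per-cue positioning (top versus bottom) travels separately, as WebVTT cue
# settings such as "line:0" (top) or "line:-1" (bottom) on each cue.
print(cue_css({"font": "monospace", "size": "120%", "color": "yellow"}))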
The big news earlier this year, back in March, was that Verizon Media committed to captioning 100% of every video on every platform. And we were so happy to be able to do that. And the unique part of that, which I'll talk about, is that it's not only the media that we create, it's all the media and all the video that we license from so many partners. In addition to that, at the time, we also donated $5 million of free advertising on our platforms to a number of organizations. And that was really a great opportunity as well.
So when I talk about our own media, I'm talking about Yahoo Finance and Yahoo Sports, Yahoo News, AOL, Makers, TechCrunch, Engadget. So much of our own media that we create that we also caption using both live and video on-demand services. But it's the fact that we actually caption all of our partners' media as well. And that-- I've got a long list on screen here. Everything from ABC network and local to Vogue.
Now some of these partners actually provide captions to us, and we are working to get a lot more of the captions that they've already produced up and online. But any partner that doesn't happen to have captions available today, we actually, with our own funds and our own efforts, caption all of those partners' videos as well. And we're pretty proud of the fact that even though it's not required by the FCC, we're captioning all of this content from so many of our video partners.
Now let's talk a little bit about caption quality. As Peter Drucker, well-known business analyst, once said, if you can't measure it, you can't improve it. And from this point on, everything I'm going to say is my own personal point of view and my own opinions, not necessarily something that Verizon Media or Yahoo has weighed or measured.
But the issues really go back to 2010, when the WGBH National Center for Accessible Media created the Caption Accuracy Metrics project with Nuance Communications, recently bought by Microsoft. We looked at how you can measure the quality of captions and created a new measure called weighted word error rate. Traditional measurements of speech-to-text translation only talked about word error rate, without weighting the importance of the errors: how severe some errors are and how minor some others are. And we launched that project; the results are still available online.
In 2014, when the FCC was looking into requiring caption quality, we approached them and said, we have a way of actually measuring that automatically. At the time, the FCC wasn't comfortable creating metrics. They thought it might be too burdensome, so they came out with the requirements for accuracy without necessarily looking at the nuances, pun intended. What makes for a bad error, and what makes for a minor error?
Now here we are today. Automatic speech recognition is on the rise. We are all looking at how well captions are handled in automatic speech recognition. But still, we don't have metrics that are agreed upon across all of our platforms. So I'm going to propose that maybe it's time that we do create reliable cross-industry metrics for both human generated and machine generated captions.
Here's a little bit about the word error rate. Today, it is calculated as substitutions, deletions, and insertions all added together as forms of errors, divided by the number of words. Now in the weighted word error rate (there is on screen right now a complex calculation), basically, it adds one extra dimension, and that is the severity of the error. If you drop the word "the," or perhaps misspell a word that's commonly used, those have different levels of severity. And that needs to be taken into account because, as we've all experienced in both human-generated captions, steno and voice writing, as well as ASR, not all errors are the same, nor should they be weighted the same.
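A minimal sketch in Python makes the distinction concrete. The severity weights and the list of "minor" words below are illustrative assumptions, not the values used by the WGBH/Nuance metric or the patent discussed next:

# Minimal sketch: plain word error rate (WER) versus a severity-weighted variant.
# The weights and word list are illustrative assumptions only.
MINOR_WORDS = {"the", "a", "an", "of", "to", "and"}  # hypothetical "minor" words

def severity(ref_word, hyp_word):
    """Weight an error: small for dropped or garbled function words, full
    weight for anything else (for example, a mangled proper name)."""
    word = ref_word if ref_word is not None else hyp_word
    return 0.25 if word.lower() in MINOR_WORDS else 1.0

def word_error_rate(ref, hyp, weighted=False):
    """Levenshtein alignment over words; returns total error cost / reference length."""
    r, h = ref.split(), hyp.split()
    cost = (lambda rw, hw: severity(rw, hw)) if weighted else (lambda rw, hw: 1.0)
    # dp[i][j] = minimum total error cost aligning r[:i] with h[:j]
    dp = [[0.0] * (len(h) + 1) for _ in range(len(r) + 1)]
    for i in range(1, len(r) + 1):
        dp[i][0] = dp[i - 1][0] + cost(r[i - 1], None)        # deletions
    for j in range(1, len(h) + 1):
        dp[0][j] = dp[0][j - 1] + cost(None, h[j - 1])        # insertions
    for i in range(1, len(r) + 1):
        for j in range(1, len(h) + 1):
            sub = 0.0 if r[i - 1] == h[j - 1] else cost(r[i - 1], h[j - 1])
            dp[i][j] = min(dp[i - 1][j] + cost(r[i - 1], None),    # delete
                           dp[i][j - 1] + cost(None, h[j - 1]),    # insert
                           dp[i - 1][j - 1] + sub)                 # substitute or match
    return dp[len(r)][len(h)] / len(r)

ref = "Shakuntala Acharya will present the results"
hyp = "chuckling villa will present results"
print(word_error_rate(ref, hyp))                 # 0.5: three errors over six words
print(word_error_rate(ref, hyp, weighted=True))  # 0.375: dropping "the" counts for little; the name errors carry the weight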
So we're proposing what the actual patent that was awarded to WGBH and to Nuance back in 2014, "Quality Assessment of Text Derived From an Audio Signal," patent number 8,892,447 (you can look it up yourself), actually proposed: a way of automatically deriving a reliable standard for errors and for accuracy. Now since that time, and that's now seven years ago, there have been many advances in the ability to measure the quality of text derived from an audio signal. So I think it's time we moved on.
The issues that we need to look at, the ones that both humans and automatic speech recognition engines will struggle with, include such challenges as the use of speaker IDs. Now, some video conferencing platforms can attach the speaker's name to the words being spoken, but you don't see that everywhere online. It's actually kind of rare. Stenocaptioners, however, can do that relatively well.
Non-speech information. Some automated systems today can identify noises, music, even tone of voice, but that's not common. And those are the issues that, when we start seeing automatic speech recognition rolling out, or perhaps even replacing humans on any of our platforms, we have to take a good hard look at: can they handle these other challenges? The other major challenge is noisy environments. When someone is talking in a crowded environment, or where there's a lot of background noise, that's when the machines begin to fail, and when humans, the human brain and human ears, can really discern what the speech is versus the background noise.
We're all kind of frustrated (I know I am) when someone doesn't use a microphone. I'm using a very simple one today. That's when bad audio gets fed into the machine transcription, and that will result in mistranscriptions or word errors. When someone has a very heavy accent or a speech impairment, machines again are failing today where a human can discern some of the audio that should be turned into text.
Also, as you see a lot online (you'll see it right in Yahoo Finance), people talk on top of each other. Again, automatic speech recognition has a lot of problems discerning that. And certainly, as you look at broadcasting, when there are panel discussions, when there are a number of anchors talking over each other, again, humans can handle that. I know the machines eventually will, but not so much today.
And then, recently, we came up against an issue of profanity and censorship. Now, those of us in the world of companies that actually create captions have always believed that we shouldn't be censoring audio. If a hearing person can understand the content, then a person who relies on captions should have equal access to that content.
Some automatic speech recognition engines, nervous about having a profanity slip through, will often put on screen, for instance, "S***," even when the profanity wasn't actually spoken. They will substitute that, and probably overdo it, because that's not actually what the person said. Or the F word, or any other profanity. Well, the speech recognition engine shouldn't be substituting that censorship when hearing people can hear the word, or when it's not actually being said. And I think the speech recognition engines can be improved so they don't have to act as a censor for what should be heard.
I want to just put an example up on screen right now. A lot of people like to put caption errors up on their screens. They'll share them in social media, particularly the ones that are coming through on ASR these days. And they can be amusing. They can be fun. And my title here says, not just oops, lol. In fact, it can be quite insulting. It can be quite damaging. And the example I put on screen right here is a screenshot of a video conference where the speaker's name, Shakuntala Acharya, is rendered as "chuckling villa" by the ASR captioning technology.
Now that's not equal access. And it's actually quite insulting to the speaker that the ASR engine simply couldn't understand her name. Now, perhaps it could have been fed in ahead of this panel discussion. Certainly, human captioning services ask for those spellings of names in advance, but ASR technologies do not tend to do that. And I, for one, don't appreciate it when someone's name is so badly mangled by a speech recognition engine.
So right now, the FCC has asked for comments on reviewing the CVAA, and the deaf community and TDI, among many other organizations, have suggested that it's time to set some metrics. I think we're ready. And the suggestion I'm making today is that it's time for a Turing Test for closed captioning. Now, the Turing Test was developed by a technologist named Alan Turing, and is defined as a test of a machine's ability to exhibit intelligent behavior equivalent to or indistinguishable from that of a human.
And for closed captioning, and for what any caption generation device can create today, I believe the vice versa, the opposite, should apply as well. We should establish a high level playing field for humans and machines alike. Whichever way the captions are being created, by machines, humans, or voice writers, they should meet an equally high level of accuracy. And I suggest that the FCC, in fact, should find a way to apply these technologies so that together we can appreciate and have a much higher level of captioning however captions are generated, whether by machine, by human, or by whatever next technology comes along.
So with that, I'll turn it over to the panels, who I'm sure will have much to say about this, and welcome you to comment to us at Verizon Media. By the way, when I said 100% captioning: yeah, we'll miss a few things. So you can write to us at accessibility@verizonmedia.com. Comment on what I had to say today, but also comment on our captions: good, bad, or missing. We want to hear from you. We want to hear from TDI and everyone else who relies on captions. Tell me what you think about what I said today, and what you think about the captions we're providing, as well as what anyone else is providing. And with that, I'll say thank you for inviting me, and see you around.
>> LARRY GOLDBERG: Hi, folks.
This is Larry live here.
I know so many of you have been wanting to see my face so here it is, enjoy the lip-reading.
We have a couple minutes before the panels start.
They’re going to be very excellent.
I was not cloned by the ASR, I recorded that session previously.
It’s a really important issue you’ve heard about today.
I really think that everyone needs to chime in on this.
Christian Vogler from Gallaudet has been doing some deep research into exactly these issues of metrics and how ASR could compare to human captioning.
It is, it is a tough science.
I think there are many sciences, many technologies that could be applied here.
For example, when there’s bad audio, ASR really begins to fail.
Well, you can filter the audio, you can improve the audio that gets fed into the ASR engine and add those sorts of improvements to improve both ASR and human-generated captioning.
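As a rough illustration of that point, here is a minimal sketch assuming a SciPy environment: a generic band-pass filter over the core speech band, applied before audio reaches an ASR engine. Production caption pipelines would use far more sophisticated denoising:

# Minimal sketch: clean up audio before it reaches the ASR engine.
from scipy.signal import butter, sosfilt

def speech_bandpass(audio, sample_rate, low_hz=300.0, high_hz=3400.0):
    """Attenuate rumble and hiss outside the core speech band (roughly 300-3400 Hz)."""
    sos = butter(4, [low_hz, high_hz], btype="bandpass", fs=sample_rate, output="sos")
    return sosfilt(sos, audio)

# Hypothetical usage: feed speech_bandpass(samples, 16000) to the ASR engine
# instead of the raw samples.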
I’ll leave the logistical questions to our hosts here and answer one of the questions here: how would metrics be implemented?
Under what the FCC is presently asking, the metrics would be implemented by becoming a requirement.
There are caption accuracy standards, more guidelines and suggestions, that the FCC has issued around latency, accuracy (without a specific number), placement, and completeness.
So the question is: will the FCC be ready, willing, and able to provide some specific technical data, the way they do with broadcast signals, even telephony?
This is one of the issues that we’re interested in seeing.
Do all broadcast stations use software that can detect incoming text?
No, it’s hardly common for broadcast stations to have the ability to alter captions.
>> AUTOMATED VOICE: Recording in progress.
>> LARRY GOLDBERG: It really is the responsibility of the originator of the captions to make sure they come through and then get passed through.
I notice that we have a little issue with the interpreter and captioning, but you have the panels on screen now.
There are three breakouts.
I want to see all of them.
And there’s one on TV captioning, one on web captioning, and one on IP CTS, and the links that you can put in your browser are there.
I think I probably should go ahead and let you get to those panels.
All of our very good friends and people we work with closely are going to be speaking about the issues on TV captioning, web captioning and telephony.
So with that I would advise —
>> AUTOMATED VOICE: Recording in progress.
>> LARRY GOLDBERG: — comments through TDI or directly to the FCC on this new proceeding on updating the CVAA.
And with that, I will turn it back over to the host and say good-bye and hope to see you all in person sometime soon and let’s see how the panel deals with this question.
Thanks for inviting me.
Take care.
cc: TV (breakout)
Karen Peltz Strauss, Christian Vogler, Larry Walke, and Opeoluwa Sotonwa
Transcript
>> OPEOLUWA SOTONWA: Ope here.
>> I’m sorry for the technical difficulties we’ve had.
My name is Ope Sotonwa.
And I’m a TDI board member at large.
I’m honored to be a distinguished moderator today.
So thank you all for joining us.
Before we start, some housekeeping rules to help us facilitate communication today in this beautiful, wonderful discussion we’re going to have.
We do have captioning.
As many of you know, I’m Black; I’m wearing a gray suit with a blue tie and a white button-up shirt.
I am bald.
So now I’ll turn it over to the wonderful panelists that we have today to introduce themselves and share with us a little bit of background about who you are and tell us why you were chosen for this panel (chuckle).
So I’ll start with Karen.
Or Christian.
>> CHRISTIAN VOGLER: Hi.
My name is Christian.
I’m a white male.
I have brown hair.
With a little gray that’s poking through just now (chuckle) and I’m wearing a black shirt and I have a blue background.
So thank you.
Let’s turn it over to Karen.
>> KAREN PELTZ STRAUSS: Hi, I’m going to use the interpreter because we have limited time, OK?
I’m Karen Peltz Strauss.
And I’m a white female with short brown hair which was long last week because I had let it grow through COVID and am now donating it.
I am wearing a multi-colored dress and I have some silver jewelry on.
And I guess I’m middle-aged or maybe older than that but young at heart.
Sending it over to Larry.
>> LARRY WALKE: Hi, thank you for having me.
My name is Larry Walke, pronounced as you walk down the street.
I’m a white male, black hair, in my mid-50s.
I have a bright blue buttondown shirt on.
And a little bit of gray also, like Christian, and looking forward to the panel.
Thank you.
>> OPEOLUWA SOTONWA: Thank you, Larry, Christian, and Karen.
Really, thank you so much for being here with us.
Your expertise, your resources, the knowledge that you bring to this discussion around captioning is great.
30 years ago we would have never thought that deaf people would have equal access at every level.
But with the ADA, we are able to challenge the status quo.
This means we can demand change, and actually create change that benefits people who have hearing loss and the deaf community.
And that also led to the passage of another law, the 21st Century Communications and Video Accessibility Act, in 2010.
That law requires captioning at every level.
So we really want to hone in on and talk about television.
Television has already come so far, with new technologies coming into play.
We’re trying to see the potential impact of those changes that we’ve seen throughout time.
And there are some areas where we might need to roll up our sleeves and keep doing that hard work to improve the requirements for accessibility.
So that’s why I want to have this discussion today.
The audience, if you have any questions, please use the Q&A feature, and as you talk and as we’re talking as a panel, please use the Q&A feature, and we will keep an eye on that and make sure that we bring your thoughts and your questions into this discussion.
I’m curious, panelists, from your experience, since you’ve seen all the shifts and the changes in technology in the world that we have today, including captioning on TV: what’s your perspective on the landscape today?
I think Christian was going to start.
I see he was ready to jump in and start that so go ahead.
>> CHRISTIAN VOGLER: Christian here.
I have a lot to say here.
I would be grateful for Larry explaining what’s going on and how the Internet has actually impacted cable.
That sort of frames my argument.
Do you mind, Larry, speaking on that first?
>> OPEOLUWA SOTONWA: This is Ope.
>> LARRY WALKE: I guess I’m trying to — I assume what you’re asking about is captioning on television as opposed to captioning on Internet video programming?
Is that what you’re asking?
>> OPEOLUWA SOTONWA: This is Ope.
Yes.
It’s a little different.
The 21st Century Act focused more on the Internet, right?
And for TV, now we’re assuming that cable has already developed their technologies, that’s in play already, right?
And we feel that’s in the right place.
It’s not perfect.
But it’s mostly accurate, or accessible.
But now we’re transitioning to an online world, which is cheaper and more accessible, where you can access cable at home and, like, streaming services and anywhere you go, really.
Even though the online world doesn’t seem to have the captioning down quite right.
So with the use of artificial intelligence, AI, ASR, and things like that, it’s becoming an issue.
And there have been some experts and industries in the field that feel like it’s good enough.
And there are some deaf and hard-of-hearing people, community members who feel it’s not quite there.
It’s not enough to meet the needs or accommodate people who are deaf or hard of hearing.
And there are some issues still with, you know, inaccurate captioning, dealing with accents, not being able to accurately portray that message.
So as a TDI board member, often we talk about TV “craptioning,” meaning it’s no good, so we’re labeling it “craptions.”
So we’re, like, oh, gosh, this captioning doesn’t work.
So what are your thoughts in regards to that and what you’re seeing as we transition from, you know, cable to online technology?
AI, how wonderful it is, but at the same time, you know, we need to have a standard policy in regards to it.
What are your thoughts?
>> LARRY WALKE: Well, I mean, first I should make clear I’m not a technologist or a technical person by any stretch of the imagination.
And I work for the National Association of Broadcasters, so to the extent I have any familiarity with, you know, the rules and obligations for captioning, they have to do with your traditional over-the-air television stations and what rules and policies apply to them.
My supposition is that there’s probably a much wider variety in the quality and even the availability of captions on online video because there are no FCC policies or rules requiring a lot of that video programming to be captioned.
Now, if there’s a video program or a video clip that’s been shown on over-the-air television first and then subsequently shown online such as on a TV station’s website or their app or perhaps some other online outlet that they have a relationship or arrangement with, well, then that program must be captioned with the same quality as it was when it was shown on television.
But, of course, there’s just, you know, an unlimited amount, tons and tons of video programming and clips that are just online and have never been shown on television before. And the question of whether those clips and videos have to be captioned, at least under the FCC’s rules or policies, so far that’s an open question: whether the FCC has the legal authority to require Facebook, Netflix, you know, and the millions of other Internet websites to caption their content or not.
I’m not really smart enough to opine on whether or not the FCC has the legal authority to require that.
I know TDI has strong views on that.
National Association of Broadcasters or NAB, where I work, we haven’t taken a position on that.
It’s a fairly thorny issue because it brings in the larger — the larger issue of whether the FCC, what kind of authority they have over the Internet or online content.
And for better or worse, that’s not something that NAB right now has had to concern itself with.
So my guess is that everything you say about, you know, the range of quality of captions online, from good to bad, I’m sure it’s correct, and perhaps, you know, more government intervention or something needs to be done there.
But I can’t comment on, you know, why some online outlets have better captioning, and probably a lot of them have craptions, as you say.
>> KAREN PELTZ STRAUSS: I can jump in if you like.
So, as you know, I’ve been involved in these issues, well, probably for around four decades.
I used to work at the National Center for Law and the Deaf, later renamed the National Center for Law and Deafness, at Gallaudet University, later at the National Association of the Deaf, and then did two tours of duty as deputy chief of the FCC’s Consumer and Governmental Affairs Bureau, and in these various capacities I helped write the laws and regulations governing captions.
When we first started drafting the CVAA, the 21st Century Communications and Video Accessibility Act, we saw what was happening in terms of video programming.
We went to Congress and asked for that law to cover all kinds of video programming, including online video programming, but that was back in 2007; the law got passed, as you mentioned, Ope, in 2010, and it just wasn’t ready for prime time for Congress to require captioning on just about everything that looked like or resembled television.
And so all that we were able to get at that time was that if it’s been shown on television, and it is then delivered via Internet protocol, it has to be captioned. And as Larry mentioned, the quality rules that the FCC adopted while I was there in 2014 also carry over with that transition to Internet protocol.
But there are huge gaps, and TDI and many other disability consumer organizations, including some research institutes such as the one that Christian works at, have asked the FCC to take further action to address the virtual explosion of online video distributors.
Kudos go to the National Association of the Deaf who were successful in getting a settlement after a lawsuit against Netflix many years ago which prompted Netflix to caption nearly all of its programming.
That was a real turning point, after which Amazon followed suit, and Hulu has done a fairly good job as well, although they’re third in line. And some of the major streaming services are now just automatically captioning, understanding how mainstream accessibility services have become.
But this has gone way beyond even those more mainstream types of streaming services.
There are now studio-specific services as well, such as Disney Plus and Showtime and Peacock and Apple TV, live-TV-focused services such as Sling TV and YouTube TV, and specialized offerings such as Acorn and BritBox and ESPN, and I don’t mean to call out any of these, but the list goes on and on, and there’s Facebook Watch, Twitter, Instagram, and Snapchat.
It’s truly overwhelming, and this doesn’t even get at the devices that people use or the operating systems that people use. We all know that when we log on with one device, with one operating system, with one streaming service, we’re going to have a very different captioning experience than if we log on with another; the captions may or may not be there, and it may not even be because they weren’t provided, it may just mean we don’t know how to access them.
So it’s overwhelming, the changes that have occurred.
And when we wrote the CVAA we really tried to make it future-proof and it was next to impossible.
So there is an opportunity to go back not only to the FCC and have it exercise its authority to the maximum extent possible and close some of the original categorical exemptions that still exist; for example, there is not even a requirement right now for commercials to be captioned.
I saw that on the chat.
We tried to get that back in 1996.
We were not successful.
There’s not a requirement for new networks to provide captioning until they’re four years into their business, which is fortunately not an exemption that’s used all that often (most networks start from the beginning), but it’s there.
They can actually skip the first four years.
There’s a lot that the FCC needs to do to rectify some of the problems that are within its jurisdiction, like the categorical exemptions, and then I think there are some things that Congress is going to have to attend to.
>> OPEOLUWA SOTONWA: So Larry and Karen, thank you.
Before Christian jumps in here, I would like to take a moment to say something.
TDI does very well focusing on the interests of the consumers.
And based on what Karen and Larry just shared with us, it seems that the industry experts feel they have reached the point where ASR is meeting the needs for closed captions.
But as a community, we’re feeling like not enough has been done.
So I’m wondering what you think about potential opportunities for us as consumers, as customers, to gather the data that we need based on our experience, the shared experience of using these captions.
Specifically captions on TV with ASR, and then maybe informing the FCC and trying to strategize, maybe with you; can we have that discussion?
Christian?
>> CHRISTIAN VOGLER: OK.
So, first of all, I want to mention that HLAA, the Hearing Loss Association of America, put out a survey two years ago, and they did ask about consumers’ experience of captions.
So we are analyzing that survey.
And there’s a lot of negative comments in that survey.
It’s quite shocking.
And we have filed that analysis with the FCC.
So that’s talking about captions in general, not focusing on ASR only, but in general.
So I wanted to mention that because often it’s very hard for the consumer’s perspective to capture which type of caption and which method they’re using to receive the services.
So I just want to clarify that that’s not specifically in the survey.
Yeah, it’s not clear which type we’re talking about.
It’s just not clear, because we do have that variety.
And that is one reason why it’s sometimes hard to identify exactly what is ASR.
You know, captions on TV, that’s an old system.
Very old.
The first caption standards are, like, 40 years old; they’re over 40, those standards.
And even though digital TV has come about, those standards are 20 years old.
So it’s very old.
Everything is very old.
So that means, then, that a lot of the issues that are showing up with captions are old ones; and on top of that, we need to talk about the Internet and broadcasting and the pipeline there, about how captions are passed along and received, and what the opportunities might be for mistakes to show up.
So it does cause a lot of issues.
And sometimes what it looks like may not be what it is; we may be placing blame on the wrong party.
We place it on the captioner, and oftentimes it’s the technology.
So we have to confirm exactly where the problem is.
Whether it’s ASR, and then what technology we’re using.
>> OPEOLUWA SOTONWA: Ope here.
So Christian, how do we then identify who is using ASR and who is using just the standard captions?
>> CHRISTIAN VOGLER: Christian here.
Um, you ask, you ask the participant, you ask the human: what are you using?
Are you using ASR?
So you would ask them.
So you’d have to ask the broadcaster.
And then sometimes you can tell from the type of error; you know, it can look like it’s from the ASR.
So, as an example, maybe the system tries to correct a word, but it’s wrong.
And so we can blame the ASR.
But there’s also the technical part, ENT, the Electronic Newsroom Technique.
So sometimes we’re getting some errors from that space as well.
So, yeah, it does depend.
So I’m going to give it over to you now, Larry.
>> OPEOLUWA SOTONWA: Ope here.
My question is how could we — who do we hold responsible, then, for these sorts of errors?
And how do we get ahold of the right person so we can make the policy changes that are necessary?
Larry, I saw your hand go up.
Did you want to make a comment here?
>> LARRY WALKE: Um, yeah, well, Christian is right, of course, that the best way to find out what kind of captioning is being used is to ask the broadcaster.
Before the panel, I did talk to a few owners of TV stations, some TV station groups.
And some of them are using ASR here and there.
Some are not using it but having it run in the background while they’re using live captioners.
Some informed me that during COVID, when there were lots and lots of live public events, like governors’ press conferences and things like that, they often had a lot of problems finding a live captioner, and during those circumstances they used ASR.
Then there’s one station that mentioned they’re using ASR, but during the exempt hours early in the morning.
And this is not statistical, it’s only anecdotal, but all of these stations and companies that are trying it here and there report no change in the number of concerns or questions or complaints that they receive from consumers.
And some of them expressed surprise at that because a lot more people are at home watching television.
They also mentioned, like Christian said, that some people can tell because I guess ASR just rolls a little differently on screen than a human captioner.
But some of these stations, some of these people, mentioned that, at least from their point of view, when ASR is trained properly and cautiously and extensively, so that it can learn the lexicon and the dictionary, and the station is subscribing to a service that feeds in new names and things like that, you know, when it’s used cautiously and properly and trained enough, it works very well.
And in certain respects (these are anecdotal observations), it’s better than live captioners in terms of completeness, catching every single word.
Someone else mentioned something very interesting to me, which I didn’t know, which is that when you use ASR, there is a specific, reliable amount of latency between someone talking and the captions being produced.
Let’s say it’s one and a half seconds.
When you use a live captioner it can be anywhere from one second to five seconds if it’s a really tough word or something.
And they said because of that consistent period of latency, they can delay what goes out over the air and match up the speech with the captioning much more closely, much more reliably and consistently.
And that was something I really didn’t know about until a few days ago.
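To make the arithmetic concrete, here is a toy sketch in Python using the hypothetical numbers from that anecdote; the 1.5-second figure is illustrative, not a measured value:

# Toy sketch of the alignment trick: if ASR latency is a known constant,
# delaying the outgoing video by that same constant makes each caption land
# exactly when its words are heard. A human captioner's variable one-to-five
# second latency offers no single delay that lines up this way.
ASR_LATENCY_S = 1.5  # assumed constant gap between speech and its ASR caption

def air_and_caption_times(speech_times, video_delay_s=ASR_LATENCY_S):
    """For speech spoken at time s, the delayed feed airs it at s + video_delay_s,
    while the ASR caption arrives at s + ASR_LATENCY_S; with
    video_delay_s == ASR_LATENCY_S the two coincide."""
    return [(s + video_delay_s, s + ASR_LATENCY_S) for s in speech_times]

for aired, captioned in air_and_caption_times([10.0, 13.2]):
    print(f"audio airs at {aired:.1f}s, caption appears at {captioned:.1f}s")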
So, consistent with what Larry Goldberg said in the previous session, my impression (and, again, this is, you know, third removed) is that ASR has been making a lot of progress in the last couple of years.
But maybe not ready for prime time.
That’s — that’s another discussion.
But in certain respects, it seems to work very well, and consumers can’t tell the difference.
And, you know, it can be valuable when it’s difficult to locate a live captioner on short notice.
It can allow more content to be captioned.
These are some of the anecdotes that I was informed about when I was asking some companies about their use or whether they’ve tried to use ASR.
So…
>> OPEOLUWA SOTONWA: Thank you so much, Larry.
Karen, turning it over to you.
>> KAREN PELTZ STRAUSS: So I generally agree with Larry that there have been tremendous strides made in ASR, and it can work really well for prerecorded programming, where somebody can check the accuracy of what is provided and save time that way. But I’m still worried about it for live programming, for a number of reasons.
And many of them were enumerated by Larry.
When you have people who have varied accents, where you have specialized vocabulary, where you have proper names, where you have overlapping speakers, where you have background noise or background information; as you know, sometimes when we watch television the captions will indicate whether somebody’s whispering or screaming, and ASR is not going to do that.
It can be good in emergencies, and, again, it can also be detrimental if it’s not accurate.
And there are studies, unfortunately, that raise concerns about the error rates of ASR when used by persons of varied dialects and races.
And FCC Commissioner Geoffrey Starks, for example, pointed out a study by Stanford University in 2020 that showed error rates that were almost twice as high for people who were African-American as they were for people who were white, even when each speaker spoke the same words and was the same age and gender. And there was another study, by the National Academy of Sciences, that again revealed disparities because of variations in speech caused by regional dialect.
So these are challenging for ASR systems.
Again, I don’t want to be a naysayer.
I think that tremendous strides have been made.
But we have to really be careful, because if misinformation is provided, especially in live news, that can be really harmful to the people watching those programs.
The other thing that I want to mention is that, for better or worse (I think for those of us stuck at home, for worse), a lot of us are working from home, and so, for example, relay services have moved almost entirely to the home environment.
So while there may have been a need to use ASR when captioners could not be found at the beginning of the pandemic, the fact is people can work from their houses and provide captions from home; my son provides audio descriptions from home, not only doing the writing of audio description but the voiceovers from his house, and the equipment was not expensive to purchase and very easy to set up.
The other thing I want to comment on, with respect to what Larry said, is complaints. Unfortunately, when I was at the FCC, we noticed that we did not get a lot of complaints on these issues either, although I will say there have been periods where complaints have come in. But the lack of complaints is not a good indicator of the lack of quality.
And one of the real reasons for this is that people who are —
>> I was going to say that, Karen.
>> KAREN PELTZ STRAUSS: Do not know what they can’t hear.
So if the captions are bad it’s really hard to tell that the captions are bad, that’s the first reason.
The second reason is that people just don’t know how to complain.
I mean, as much as — when I was at the FCC we would try to inform people on how to do it.
Not everybody is connected to a national organization, for example.
And then the third reason is that people are busy.
You know, especially during this pandemic.
Things have been turned upside down.
While many of us stayed at home, all of us stayed at home, many of us had children to educate, grandchildren to babysit.
We didn’t have a second to ourselves, it wasn’t like we had leisure time.
So there are lots of reasons that people don’t complain.
And so it’s really not a good indicator for the accuracy of ASR.
>> OPEOLUWA SOTONWA: Yes, thank you so much, Karen.
Before we jump to you, Christian, I would like to say it’s very interesting to read the research on this.
There has been some evidence that hearing kids who watch TV with captions on are actually learning better and are becoming smarter.
And so it’s a little bit concerning when we use ASR that the captions may have these errors, because hearing kids are able to, you know, correct for that from the audio.
Deaf and hard-of-hearing children are not able to make those corrections.
So it impacts the people who are learning as well.
So that’s one of the reasons we’re feeling like consumer perspective here is critical and key to making this change successful.
So I will go into Christian now.
>> CHRISTIAN VOGLER: Yes, Christian here.
First, I’m a strong advocate of what Karen has said here.
Our research does verify all of that: that deaf people don’t know the errors in captioning are actual errors, because they can’t hear the input.
If you’re reading the caption transcript and compare it to what actually is being given audio-wise, it is shockingly bad.
The captions look good, they’re understandable, but you can see a lot of things are missing when you compare the two.
It’s quite amazing.
So there is a big difference in having a human captioner who is able to figure out how to pull that information and condense it down to make it an easy-to-read thing so we need to think about that.
ASR cannot accomplish that.
So human and ASR captions both have missing information, I will say that.
It’s just in different ways.
So, talking about how, then, to hold someone responsible, right?
Larry G., Goldberg, did mention captioning metrics in his presentation.
And so we do need to set up and have some sort of metrics.
It is very complex because the metrics need to be neutral and they have to apply to both the human and ASR captions.
It’s not a simple thing.
Human errors, ASR errors are very different.
And we haven’t seen any metric that can capture both of those in the right way.
Some of the metrics do apply very well for ASR.
Other metrics apply specifically for the human captioners.
How to combine those and bridge them together needs more research.
>> OPEOLUWA SOTONWA: Ope here.
Thank you so much, Christian.
Wonderful food for thought here, and I agree with you.
So I wanted to add to this conversation that the metrics need to include the consumer lived experience.
Not just the metrics that are given by industry experts who are making decisions for people.
Next question to you all: This morning I was watching the Olympics on television, and I noticed the captions sometimes were covering information that was on the screen, like who was competing and what country they were from.
So I felt like I needed some flexibility about where the captions might live on my screen.
I really wanted that feature.
I wanted that flexibility.
At this point, I don’t have that.
They’re in one place.
Do you have thoughts on whether we should push for that feature, a standard allowing the human user the flexibility to move the captions on the screen, rather than leaving them wherever some industry player has decided they go, top or bottom? Because in the end, that has created some barriers; I’m not able to see some of the information in what I’m watching.
Any thoughts about that, guys?
Open for discussion.
>> KAREN PELTZ STRAUSS: So when the FCC issued its quality rules in 2014, it looked at four different categories of quality: accuracy; synchronicity, making sure the captions are synchronous with their corresponding dialogue; program completeness, making sure the captioning runs from beginning to end; and placement.
There’s a rule in place that says captions are not supposed to cover up other important onscreen information, such as faces, featured text, or graphical information essential to understanding or accessing a program’s content.
However, as you saw today and as I see all the time, this is not always followed.
And one of the reasons is that there is so much material on the screen at this point that, I’ve been told, it’s hard to find a place for the captions where they’re not going to cover up something.
When the FCC adopted captioning display standards, I’m not sure of the year, I think it was in the early 2000s, yes, it was, around 2000, it required the ability to control how the captions display in a lot of different ways: the font, the size, the color, the background, et cetera.
But it didn’t require the ability for consumers to be able to move the placement because technologically it was not feasible.
I think that — I’m going to hand it over to Christian at this point because I think that that may be changing with online captioning.
I just — for example, with Zoom, you can move it around.
And I love it.
I love to be able to move it around.
And my television right now is basically a web device.
It actually talks to me, to my dismay; it has sent us messages while we’re watching it; it’s a brand-new device and apparently knows who we are.
I have to change the privacy settings to get that to stop.
But I think that it’s going to be possible.
And I would love, for one, to be able to control the placement.
So handing it over to Christian to hopefully tell us that we’ll be able to do this in the future.
>> CHRISTIAN VOGLER: Yes, Christian speaking.
Definitely, I’m in agreement with you, the technology does exist now.
We can move the captions wherever we want, we do have that technology.
And I also agree with Ope there, users would definitely benefit from being able to move the captions up and down.
And it should be possible.
It would be a very valuable option.
So I would say part of the issue is with television, which is still old technology.
Again, I’ll say it: the technology is old; the digital technology that we have there is very old.
So I’m in agreement that the TV manufacturers and technology people have to get together to improve the technology that is in there and the standards that they have.
Because it’s 20 years old.
We need to get that replaced with something new.
And it does take time.
It doesn’t happen overnight.
But with what we have now, the TV manufacturers can provide the option to move the captions; the technology is there.
And I think — I may be wrong — but I think that maybe LG does have that option.
Don’t quote me on that, though.
>> OPEOLUWA SOTONWA: Ope here.
>> CHRISTIAN VOGLER: Yeah, sorry, trying to remember which one it was.
>> OPEOLUWA SOTONWA: Thank you.
It’s good to know that that technology is there.
Now the question is how do we convince those TV manufacturers to actually adopt the technology and allow us the flexibility and the freedom that we need to have those adjustments made?
So should we, you know, write a new policy?
Or get the FCC to write a new policy to guide them and to lead them?
I think that’s something that we definitely need to have some thinking sessions about because I know some of you are very involved at the policy level.
And so hopefully you’ll have that on your mind as this conversation starts today and continues on.
We only have a couple of minutes left.
I’m looking that there’s a couple of questions in the Q&A box so let me take a look at that.
So one of the questions asked is: why, when I’m watching TV, are the captions just terrible, but if I’m watching something on a different station, even one far away, they’re so much better? What’s the difference?
Do we have some thoughts on that?
>> CHRISTIAN VOGLER: Sorry, this is Christian here.
>> OPEOLUWA SOTONWA: Yes, Christian.
>> CHRISTIAN VOGLER: Sorry, I was trying to type my answer in the reply box but here I am.
OK, Christian here.
There are two reasons why this might be.
The first one is that the errors might be introduced in the actual broadcasting, from the station.
So the broadcaster sends the captions, and then they have to be broadcast out.
And there are a lot of steps before they get to your TV, and then the TV has to interpret all of that, so there are a lot of steps there.
Second, if you have two different broadcasters showing the same video, they might be hiring different captioners for that video.
>> OPEOLUWA SOTONWA: Ope here, thank you so much.
I would add that it also might be that the person who is watching might be watching on cable as opposed to the person who is watching maybe on an Internet streaming service, which might be another reason why there’s a disparity.
Moving on to another question.
Just give me a second here while I read it.
OK.
The question here — we have talked about it and maybe answered it already, but it's a simple question: how do we get telecommunications, or TV, captions standardized?
Does anybody want to answer this?
Karen, please.
>> KAREN PELTZ STRAUSS: So, well, the NAB actually has been really helpful for many, many years on captioning, and it's because of much of the work that they did that we were able to go to the FCC and say, look at all the captioning that's available — I'm talking about back in the 1980s and 1990s — to help push things over the edge and to get Congress to require captions.
But the industry has just gone so far, and my experience in advocacy is that while working with industry is very important and collaboration is very important — again, with the National Association of Broadcasters it was very successful on many levels — at some level you need to go to the policymakers again.
And, again, there has been just a revolutionary list of changes in this industry over the last decade that the FCC needs to attend to.
So I think that there’s just truly a laundry list of issues that have been presented to the FCC in comments, the FCC has an open proceeding on accessibility issues and TDI is one of the leading organizations that submitted comments in that proceeding.
And I think it’s incumbent on the FCC to take as much of its authority as it can to move forward, including making new standards.
And whatever the FCC cannot do, Congress needs to do.
And some of us are already talking to Congress.
So I don’t think I’m revealing anything.
I hope to — I don't know whether we're going to do another effort like the Coalition of Organizations for Accessible Technology, but we're going to try to update the laws.
They constantly need updating.
And I'm afraid that unless the laws and regulations are updated, those standards may not come about.
>> LARRY WALKE: I would just add that Karen’s on the right track there.
Most people watching television now don't know if they're watching a broadcaster or a cable programming channel or something else, and when the FCC has authority over certain video outlets and not others, and the other folks are doing it voluntarily, there's just going to be a wide variety.
The broadcasters who are regulated do everything they can to produce good-quality captions; it's good business for (?) reasons — they want to reach as wide an audience as possible, and they don't have any interest in cutting anybody out, especially these days when there's so much competition.
But, yeah, if you're watching TV and you don't know what you're watching, it could be something that just needs some more policy applied to it.
>> OPEOLUWA SOTONWA: Thank you so much, Larry.
Thank you, Karen and I want to say thank you, Christian, as well.
Really want to continue this conversation: This has been a wonderful experience.
Our time, however, is up.
So I want to say thank you to each participant and each member of our panel.
We all learned something.
And what I would like you to bring home is everything we’ve talked about here, do continue these conversations and spread the word about the need for change and for the needs of this community.
Thank you so much.
Thank you to our supporters TDI, we really couldn’t do this without you, we couldn’t continue this work.
And also we’re using this opportunity to reset, to reset all of this.
And so this is an opportunity for all of us to reset as well.
Thank you so much.
Have a wonderful day.
Bye-bye.
>> Thank you, everyone, bye-bye.
cc: Web (breakout)
Joshua Pila, Rikki Poynter, Heather York, Sean Forbes, and CM Boryslawskyj
Transcript
>> Testing, testing. Are you able to hear the interpreter? Testing.
>> Hello. Hello, hello, everyone.
>> Hello. Okay. I think this is everyone, right? Are we waiting for one more person? Oh, I see Heather. Okay. Great. Excellent. Okay. And the interpreters? Okay. All right. Let me begin. I will introduce myself: my name is CM Boryslawskyj. It's a pleasure to be here. I'm on the board, representing the northeast region, and I'm also the treasurer for TDI. As you can see, I'm in my home, not at any particular office. I want you all to be yourselves, be comfortable. I want this conversation to be really interactive and engaging. We are all here as advocates for access. Please feel free to introduce yourselves now. Anybody can start.
>> This is Sean Forbes. I have been an advocate for accessibility my entire life. I would like to provide accessibility for music; we have a captioning company that we use by the name of ASL caption. Our primary focus is sign language captioning, but we also do a lot of audio captioning as well.
>> Next?
>> As part of my job, I'm — NAD — I also own a captioning company, Dynamic Captioning, and — towards this discussion today.
>> I'm Heather York, I work at VITAC. I have been involved in captioning for over 25 years with the largest entertainment captioning provider in the country. Right now I'm in charge of marketing and government affairs. I'm a huge fan of TDI and have worked with the group for about 15 years now. I worked on the 21st Century Communications and Video Accessibility Act, I have served on three disability advisory committees, and I have worked on caption quality, clips, and rules and requirements. Our company specializes in live captioning for media, from movies to streaming. I asked to be on this streaming group because I think that's where the future is, and I'm excited to talk about it. Also excited to be wearing my co-panelist's T-shirt. No more craptions, thanks.
>> Thank you, Heather. Thank you, thank you.
(Laughter).
>> My name is Rikki Poynter. This is — he is just going to be here, but if any of the hearing people hear an actual screaming cat, that would be Simon. He may come in and out, I apologize in advance; he is going to scream whether or not I close the door. I definitely will be speaking throughout this panel, because I did grow up mainstreamed Deaf and English is my first and main language, so I'm more comfortable making sure that what I want to say gets through and is understood. So, yeah, and for the last — I may as well say a decade — I have been a YouTuber. I moved from doing make-up to talking about being mainstreamed Deaf — not "death." Autocorrect. Oh, my God. I started talking about being mainstreamed Deaf and finding a Deaf identity, learning ASL, and also accessibility, primarily with captions on YouTube. I started a campaign, and here we are.
>> Okay. Wonderful. Thank you so much for those powerful words.
Okay. Now for all of you, can you share about some of the gaps that you have seen in streaming, in announcements that have been made, in clips that you have seen? On TV? Tell me your experiences, what have they been. Does anybody want to volunteer to begin?
>> I can start. This is Heather. There are rules, the FCC rules, that require captions on television content. There are rules that require captions on streaming content that was first broadcast on TV, but there really aren't FCC rules that govern streaming-only content. Everything, like I said, is moving or at least growing: one Masters tournament that you would see on a Saturday or Sunday now has a streaming channel for every hole and every golf competition, and none of it is mandated by the FCC. I think there are agreements in place regarding the ADA, but we are seeing a lot of these streaming channels go out there and grow. The broadcasting channels are dedicated to accessibility, but I think we are losing out in quite a few places where captions are not being thought of first and foremost as much as they are in the TV world, if that makes sense.
>> I will go next. This is Rikki. I will say this in case there are any DeafBlind people here. So, I'm a YouTuber and a streamer, I play games and stuff like that. We do have the ADA, and more recently we have the CVAA or something like that, which is the online equivalent of the ADA in terms of, like, Deaf accessibility or something. But, yeah, it varies right across the board. Especially with, like, professional YouTube channels: say NBC has put their shows on YouTube with their captions — that's professional — and then you have people like me who will either outsource captions from a company like — or media or whatever, or people who write their own. Mine get outsourced, mainly from — media. I have an acquaintance — we just recently started talking about this — who runs a — channel, and his captions are very, very good, but sometimes he will outsource, and one of his caption files had quotation marks around movie titles, and that's not grammatically correct. But then the company was like, our — what's the word we are looking for — our style guide says that we put quotation marks around movie titles. But then we went to the actual style guide, and it says they don't put quotation marks around movie titles. I feel like everybody has their own set of guidelines for what they do, and it's so confusing when you are trying to figure it out. Oh, there's Simon.
So, some creators who write their own captions sometimes have this really long first line, and then there's one tiny word on the second line. It was a little bit of — what's the word I'm looking for — a gap, I guess you would say. I mean, the words are right; it's just that sometimes the actual formatting of them gets confusing. And then obviously there's the automatic captions format, where everything writes a couple of words here and then stops — no grammar, things like that.
So, I feel like it’s all over the place. Next person.
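The line-break problem Rikki describes — a long first line with one orphaned word under it — is a known caption formatting issue, and it is mechanically fixable. A minimal sketch in Python of balancing a caption across two lines; the example sentence is made up:

# Minimal sketch of caption line balancing: instead of filling the first
# line and stranding one orphan word on the second, split at the word
# boundary that makes the two lines most even in length.

def balance_two_lines(caption: str) -> list[str]:
    words = caption.split()
    if len(words) < 2:
        return [caption]
    best_cost, best_split = None, [caption]
    for i in range(1, len(words)):
        top, bottom = " ".join(words[:i]), " ".join(words[i:])
        cost = abs(len(top) - len(bottom))
        if best_cost is None or cost < best_cost:
            best_cost, best_split = cost, [top, bottom]
    return best_split

# A naive greedy fill can strand "out." alone on the second line.
print(balance_two_lines("It is so confusing when you are trying to figure it out."))
# -> ['It is so confusing when you', 'are trying to figure it out.']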
>> I will jump in and say that — they have come a long way since they were first founded, first and foremost. I think the technology is here. One thing I will say as a newcomer into this area: I feel a lot of frustration as a Deaf person with a lot of these different caption companies. You know, there's no consistency across the board. And you can automatically see, when you are watching any program, whether it's a person typing it or AI captions. For example, with my team and what we do — because we are primarily a Deaf captioning team — we are always emphasizing that our clients are the Deaf community. What's going to make the Deaf community enjoy watching this content? Because I will tell you, there are many things I have wanted to watch on Hulu and Netflix and Amazon, and if I start watching and the captions are ahead or behind, not perfect, it ruins the whole experience for me. So, you know, I feel like there are still a lot of things that need to be resolved. As I said, the technology is there, and sometimes the FCC and the limitations on things — limitations can sometimes be a problem.
>> I would like to — who would like to volunteer next? Joshua? You have your hand up?
>> I want to add a little bit to the discussion. I will start by saying I should have said it in the introduction: not only am I the lawyer who has to work on these issues on behalf of broadcasters and content companies, but — since I was 4 years old, and with my two sons, 11 and 9 — I have been a caption user since the 1990s, which puts me in an interesting spot, because I'm both a user and I'm working for and with the companies that are producing a lot of this content. So it's kind of like the old Hair Club for Men, if anybody remembers those. One of the things I do want to point out — I think Rikki said this a little bit — when I'm online, I find that even though the CVAA and the FCC's rules don't apply to content that was not originally on television — meaning there's a jurisdictional issue, the statute does not cover it — the major places, the Hulus, require that their content providers provide captioning. One thing I noticed, and going along with what Sean was saying: when they first started requiring the captioning, there was captioning with subtitles that was off in timing, and the contractual phrase was, you must have captioning, and that was sort of it. And they complied by putting something there. I will say that especially over the pandemic, I have seen better captioning on Netflix and Amazon and Hulu for content that was only online and therefore not required by law. But one of the interesting things I have seen — and I think this is because of teleworking and multi-screen worlds — is that more medium-sized players use subtitles because people are watching videos on their phones without audio. I'm forgetting about all of us who need them — but sort of from the able-bodied, general-population side, there has been a trend toward subtitles. Are the subtitles very good? Not always. But there are more than I saw a year ago. So, you know, Heather will know that you don't often hear me be optimistic — it's not generally my nature — but in the last year, I think that because so much of our life has moved online, and captions and subtitles are used not only by us on this call but by my wife, who can hear perfectly fine, I'm seeing a marginal improvement. I think there are medium-sized creators, YouTube personalities, that are taking that extra effort to include them. Thanks.
>> Absolutely. (background conversation).
>> Okay. Now, Rikki, I think you had your hand up.
>> I love this feature. Okay. Sorry. It's funny that you mentioned it — or Joshua did, too. I was watching a movie on Hulu last night, and some of the captions weren't even on. It would be like talk, talk, talk, and then it stops. I'm like, what is it saying? It just stops right in the middle. I'm like, what the heck? That was a little bit annoying.
>> I know there's a lot of work ahead of us. With ASR, the impact of ASR — does everyone know what ASR means when I say it? Sean is saying automatic speech recognition. Has that improved our situation, or do you feel like we haven't seen an improvement because of it?
>> Joshua, go ahead.
>> So, I think — automatic speech recognition — I know Heather can give some more information about this. There are different types of ASR; it's not just one monolithic thing. It's using ASR to produce caption content, but there are many different types of ASR, whether that's human-coordinated ASR or machine-learning ASR. So before we get into the ASR discussion, I do want to not lump it all into one monolithic category, but to say that there are many different versions, many different companies, many different technologies — but generally it is the inclusion of artificial intelligence. And I wouldn't just presume that the YouTube ASR is all ASR; that has a very specific purpose. But I don't know — Sean, I don't want to jump in front of you, but I know Heather has talked a lot about these different kinds of ASR, because we have been on panels together to talk about them.
>> Go ahead, Sean, do you mind?
>> I was just going to — all right.
>> (inaudible).
>> Thank you. I want to mention that with ASR, like Josh said, I agree. Every company out there — I have tested it. I like to experiment with my team; we test things out. And one thing I have noticed is that ASR can be great for videos that are in post-production and haven't been put out yet. It's a way to speed up the process, you know? But it definitely needs a human to review it and to make sure that everything is accurate, because, for example, ASR is not going to recognize a proper name correctly — that's something that a human has to implement in there. And the second thing is, I think that ASR live captioning, in my opinion, is awful. It doesn't matter whose it is; it's just not great. And as a Deaf person, when I see automatic ASR live captioning, I turn it off.
>> Yeah. I understand, Sean, what you mean by that, absolutely. Was there anybody else — or can we move on to the next question? How does everybody feel about that?
Heather, did you want to say something?
>> I was going to say, really fast, going back to different types of ASR: there is straight ASR, which is what you see on YouTube, and what they call assisted ASR, and even within those two variables there are a million different providers. We have seen — this is TV, not streaming — a lot of local news stations that have trained their anchors to speak very slowly and very clearly and that use ASR engines, or boxes, that have also been trained with proper names. I'm still not a big fan of it, but it's a lot better than I thought it would be in some of these markets. It's a significant savings for some of these TV stations. If TV stations just throw the box on the air and don't train it, or don't teach people to talk slower, they are going to have a problem. There are five or six companies out there creating different ASR engines that are in use, largely in small markets. We are seeing more of it in streaming, and I did want to point out that right now the Olympics is streaming 7,000 hours of content and only 2,000 hours is captioned by humans; 5,000 is ASR. If anything is hard to caption with ASR, it's sports. You have noisy swimming pools, you have noisy anchors, you have anchors with accents, people talking over each other. If you want a good example of it happening right now in the streaming world, I would advise you to look at the Olympics highlight reels. You can tell it's ASR: no punctuation, no speaker changes, lower case instead of all upper case. That's a good case where you can see ASR in streaming right now, going to millions of people.
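Heather's point about engines trained with proper names corresponds to the custom-vocabulary or phrase-hint features most commercial ASR services expose. A minimal sketch using Google Cloud Speech-to-Text as one example; the athlete names and audio location are placeholders, not anything from the panel:

# Minimal sketch: biasing an ASR engine toward proper names, here via
# Google Cloud Speech-to-Text phrase hints (pip install google-cloud-speech).
# The names and the audio URI below are placeholders.
from google.cloud import speech

client = speech.SpeechClient()

config = speech.RecognitionConfig(
    language_code="en-US",
    enable_automatic_punctuation=True,  # untrained ASR output often lacks punctuation
    speech_contexts=[
        speech.SpeechContext(
            phrases=["Sunisa Lee", "Katie Ledecky", "Caeleb Dressel"],
            boost=15.0,  # nudge recognition toward these names
        )
    ],
)
audio = speech.RecognitionAudio(uri="gs://example-bucket/highlight-reel.wav")

response = client.recognize(config=config, audio=audio)
for result in response.results:
    print(result.alternatives[0].transcript)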
>> This is Rikki again. Yeah, so, as a YouTuber, both as a creator and as a viewer: from a viewer's standpoint, ASR has gotten better in terms of words. One of my most-cited examples is from years ago, talking about how awful automatic captioning was. I made a make-up video about concealers — liquid coloring by the eye. When I was reading the captions, instead of "concealers" it would say "zebras," the animal. Two different things; I don't think anyone wants a zebra underneath their eyeball. That's just me, though. Sounds painful. Years later, the words — as long as you have good audio, a good mic, no background noise, stuff like that — the words have gotten better. Grammar still needs to be worked on; I know people at Google who say they have been working on it, and it takes some time. As a creator — as I said, I tend to outsource my captions anyway. Now I'm trying to do more content: Instagram, going back to TikTok. I did TikTok for a month as a business deal, and I stopped because captioning was nothing but a pain in the butt. As someone who is primarily an oral Deaf person, it would be hard to caption what I'm saying — and that was when I had better hearing, so it could possibly have been doable. Now my hearing has gone very, very far down; I can't really understand myself anymore. So I started testing out the TikTok auto-captions, thinking maybe now I could come back to TikTok. Sometimes I sign a video, sometimes I don't want to, because I don't know the vocabulary. The process is easier now: you can take the automatic captions and fix them a little bit. I may not have been able to understand everything that I was saying, but now I can get a better idea — most of the words are right, I can try to read my own lips, and here is one word I know because I remember the context of what I was talking about. So, there are good things and bad things, but it does help a lot. If you can outsource your captions, or use ASR, whatever, and go back and edit it, that's going to be so much faster than typing it all out from start to finish — especially since YouTube loves to refresh itself and lose your progress. Use the automatic captions — okay. Josh?
>> Yeah, absolutely.
>> So, I would say, like Heather is saying, ASR is good for some uses and not good for others. And so what I really encourage is not looking at different categories of captioning in a monolithic way of good and bad. For example, ASR is really good for single-speaker environments with high-quality audio. So, if you have a single-speaker environment where you have really expensive microphones and really expensive sets and all of that — I have had situations where I have run a live captioner next to an ASR, and they have been about the same. Sometimes the ASR is actually better, because, like Heather was saying, once you train it for specific words, it knows those words in a way that a live captioner may not. So there are positives and negatives. There are negatives in high-noise environments, and in, like, YouTube, when you are uploading audio files that are not high quality — we all think that we can make amazing audio and video off our phones, but it's just not the same as if you are using a high-quality HD camera with high-quality audio. So, as you are looking at it, it depends on the cost you can bear, because ASR is much less expensive — what's the cost you can bear, compared to the scenario that you are working with for the video. So, I think it has positive attributes. It's a lot better than five years ago, and I think it will be a lot better in five years than it is today. Meredith owns a live captioning company, so we also think that there's always going to be a place for live captioners: there are always going to be environments — loud sporting events and loud news events — where live captioners are the best and most important option. But you do have to weigh the different scenarios.
>> So, with YouTube, the captioning would leave out swear words; I know that it's gotten better over time. So, now moving on to the next question.
Now, does all of the Web provide captioning? Websites on the internet — are they all captioned? I know that there are some websites that don't have any captioning and there are websites that do caption. And then to add to that, if anybody knows anything about it. Any comments? Any thoughts? Joshua? Go ahead.
>> I think Google's new extension — the captions are really cool, because I like to revisit old music videos that I enjoyed growing up, and groups and interviews; I just enjoy watching that kind of stuff. The majority of it is not captioned. A lot of music is not captioned — I'm talking about videos from the '60s, the '70s, the '80s, the groups that I grew up being exposed to by my parents. So this Google Chrome thing is awesome, but sometimes I have to turn it off because it will start doing double captioning. I'm sure, in due time, Google will keep improving that feature so that turning it off and on is easy, like I turn the captions on and off on Zoom.
>> Yes. And Joshua?
>> This is Joshua. I have a follow-up with Sean — I agree with Sean completely; it just depends. As we talked about earlier in this call, the CVAA, the statute that applies to online content, requires captioning for content that previously appeared on television, because that's where the FCC's jurisdiction is. For a lot of that content, the CVAA is requiring captioning, and all indications are those companies are doing a good job of ensuring it. But at one point I heard a stat that every minute a million videos are uploaded to the internet — I don't know if that is correct or someone just pulled it out of the air, but whatever it is, a lot of videos are uploaded to the internet every minute. We don't live in a world where there are a few networks and that's it anymore; any one of us can be a content creator, and we often are content creators, even if it's a video of my kid's school, whatever it might be — that's a video on the internet, right? So one of the interesting things, when you are surfing, is trying to find out which providers are taking that step to add either subtitles or captioning. For example, I like standup comedy, and I found one online standup comedy site on Facebook Watch that provides subtitles. So I can watch that and get my standup comedy from it. But when another standup comedy clip comes up on my Facebook feed and I don't have subtitles, I move on; I do think there's a good number of sources out there that are findable. And that kind of goes to what we were talking about with ASR versus human captioners: I'm not even sure we have enough human captioners who could caption all that content — they are very sophisticated people and they can type faster than you can ever imagine. So from a resource perspective, what is that balance of what type of video online is getting captions, even though it's not an FCC rule? And then, you know, finding the providers who are doing a good job and giving them your business — that's what I and my family have been doing, giving the providers who do a good job our views, our eyeballs.
>> That is the part that is missing. Moving on to the next question: I want to congratulate Sean on his award. I really do enjoy watching that network. Now, can you explain a little bit about the success with the presidential debates? What did that involve? What did that procedure look like?
>> Back in 2016, I was running a news program called DTV news. A huge undertaking. Vickie, you used to work with us — I remember back in the day. And during that time — really, since 2006 — I have been focused on making music accessible. Through that journey, I started realizing a lot of things are still not accessible for the Deaf and Hard-of-Hearing community. So we started producing news, putting out content, just really providing the Deaf community with more information in their language, sign language.
So, in 2016, I remember we had sort of been talking about the presidential debates, and I went out of town for a performance or something, and I flew back in, and the debates were literally like the next day or two days later. My team was like, we have to do this, we have to. So that was the first one, and that was three debates between Hillary Clinton and Donald Trump. And that was an incredible experience. Each live stream viewing of that probably got about 500,000 people watching it, which, to me, is proof that the Deaf community needs this access — the Deaf community wants to be involved with politics, the Deaf community wants to have a voice, just like anybody else. Up until that point in time, you never had the ability to watch and view; we often depend on the news, depend on our family and friends, to tell us political views. And for me, it's always been important that the Deaf community has their own voice, their own views, their own opinions, rather than having them injected by your peers. Because when I was 18, I voted for the first time, and I was like, okay, who do I vote for? Of course, the first people I go to are my parents. So, mom and dad, who should I vote for? I wouldn't say I was incapable — I was lazy, okay? I was lazy.
But the fact of the matter is that it made me realize that the Deaf community really needs to get this information in their own language. So the second time, in 2020, it was like, we have to do this again. What was interesting this time as opposed to last time: the first time we did it, we relied on the captions from whoever the provider was — I mean, I know that the presidential debate was produced by the presidential debate commission and aired on all of the TV platforms. We took the approach of using, I think, C-SPAN, and their captions were not that good. So, this time around, I actually hired my own captioner, and I told them: this is for the Deaf community, don't mess it up. Everything must be perfect. And everybody is going to be watching it. And, you know, I really hope that this inspires a new generation of Deaf politicians, a new generation of Deaf people involved with government — just really opens the doors for the Deaf community to be involved in this process. So, thank you for giving us this opportunity, and I want to thank my team for really making this happen.
>> Absolutely. Congratulations to your team for doing such a great job.
So, Joshua, I have a question for you. Now, when you think about our industry, when we think about promotion, raising awareness, accountability, about there not being enough captions out there, what can we do to enhance the quality of that and to get people on board?
>> I think a lot of it — I'm sorry, it's a very good question. I think that one of the interesting things is there's always been a focus in discussions like this on, well, the FCC requires, or the law requires. And I do think that what was successful with the Amazons and the Netflixes and the Hulus of the world was eventually showing — I think Sean mentioned this — the competitive benefits of including accessibility, especially as the population ages and more people are using accessible features, whether that be captioning or screen readers or even just enlargements or things like that. I know from our captioning company that of our sales, so to speak — the people who have come in as new clients recently — some of them were, you know, 504-required, because they were state or local governments and so had a DOJ requirement for accessibility. But a good number of them came just because they realized that a large portion of their audience was turned off, and that they were missing out on a competitive benefit. So one of the things that I really encourage is the explanation of the benefits. I go back to when I was in college. I went to the University of Florida, the foundation for the Gator Nation — I hope there are no FSU fans here. When I went to the University of Florida, we had a pep rally, and as part of the pep rally, I asked them to include captioning on the big video screen. And we were able to do it. It may have even been VITAC — I don't remember who did it, but we had captioning. And my sisters went to the University of Florida four years after me. Even though nobody requested it, they kept doing it, because they found that their older alumni loved it — they thought it was great, because it was so loud and difficult to hear, and they could look at the screen. One of the things that I find when I'm working with content providers is that they want their content to be viewed and understood. And there's a balance between cost and benefit, and when you are talking about things like the difference in cost between ASR and live captioning, that's going to come into play. But I do think that if you can approach people with the concept of, this is an audience that you can better serve, that seems to resonate far more than, well, you are required to do it. Because "you are required to do it" is a 10-page legal memo about what is required under the CVAA and what is required under the FCC's rules — it gets complicated and keeps lawyers like me employed — but other than that, I think really just selling the audience is the way to go. And I know Heather deals with a lot of content people, too; maybe she wants to add to that or expand on it.
>> So, this question is for Rikki. Now, what made you decide on that term, craptions? What inspired you to come up with that terminology?
>> Okay. So I did not actually come up with the term craptions; that's been around forever, before I even thought of it. So it was kind of like, I heard about it, and when we got together on this campaign, we just kept calling it craptions. The only downside is, sometimes people will still read it as captions. When you say no more craptions, people don't see the R, and then they go, no more captions? Doesn't that go against everything you just said? Ma'am, sir, person, I'm going to need you to look closely — that's not what I said.
(Laughter).
>> Yeah. So, but, yeah. It's just been a term that's been around. Honestly, it just rolls off the tongue really well, as long as somebody is reading it properly.
(Laughter).
>> Okay. All right. Thanks for that explanation.
Wonderful. Okay. I will let other folks who might have some questions or wanted to comment or anything. I see lots of comments in the chat. There could be some questions in there, but I haven’t been keeping track. Let me see if I see a raised hand or — hum. Okay. Let me look in the Q&A. So, I see a question here. Okay. I will choose one of these questions here.
So, captions in gaming, video games in particular.
What can you all say to that? Does anybody have anything to say about that?
>> I can. Yeah, because I have been streaming — can I help you? Sorry. Simon. I grew up gaming, like a lot of us, back in the day. Pokemon was probably the easiest game to play, because everything was always laid out in front of you in text. I'm trying to remember back when I was a kid — we didn't realize it until I was 11, so I don't know what my perspective on it was before then. But when I started playing again, beyond, like, Pokemon — because you don't need captions for that, you can just read it — there were some games where there are no captions at all. The adventure games, the interactive type that I really like — some don't come with captions at all, so that was a waste of money. Some games will have the weirdest-looking captions ever — or subtitles, as they like to call them, rather. One of the series I'm thinking of is The Walking Dead. And what I did like, at least for me, was they would have a different colored font for each character. So as long as you remembered which color went with which character, it was easy to see who was saying what, if you cared about that.
Ken said GTA — I don’t know anything about that.
One thing I have an issue with within games is the captions themselves — I wish I could make an example with my hands. The font is so skinny, each letter is so skinny, skinny, that you can't really see it. There will be white font with a black border around it, except the font is so skinny that everything looks like it's all smushed together and you can't really read it. Sometimes it will be sized tiny and they don't give you the option to make it bigger. Also, sometimes they will just have plain white text — tiny, to boot — with no border around it and no background underneath it to give any sort of contrast. There was one game — it really stirred things up on Twitter, I can't remember which — where it would be winter outside or something, and so you had white subtitles, captions, on snow. And you are meant to — yeah. Sorry, someone is trying to follow along.
Yeah, so, the white text on, like, snow or something — it's like, am I supposed to be able to read this? And sometimes games will have captions for the gameplay, but the story, the cut scenes, don't have captions — and you want to know the story when you are playing a game. That helps you win while playing the game. It's just so annoying. Of course, every time it comes up on Twitter — disability Twitter — what's the word I'm looking for? Just the people who think they are up on the hierarchy or whatever when it comes to gaming, and they are like, stop worrying about the games; those games aren't made for you. It's like, can you all not, please? I don't know, yeah.
Rikki, please tell Simon — loves you.
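Rikki's white-captions-on-snow example is measurable: WCAG 2.x defines a contrast ratio between text and what sits behind it, and a backing box is what restores that contrast. A minimal sketch in Python; the colors are illustrative:

# Minimal sketch: WCAG 2.x contrast ratio between caption text and what is
# behind it. White on snow fails badly; white on a dark backing box passes.

def relative_luminance(rgb: tuple[int, int, int]) -> float:
    def channel(c: int) -> float:
        c = c / 255
        return c / 12.92 if c <= 0.03928 else ((c + 0.055) / 1.055) ** 2.4
    r, g, b = (channel(c) for c in rgb)
    return 0.2126 * r + 0.7152 * g + 0.0722 * b

def contrast_ratio(fg, bg) -> float:
    lighter, darker = sorted((relative_luminance(fg), relative_luminance(bg)), reverse=True)
    return (lighter + 0.05) / (darker + 0.05)

white = (255, 255, 255)
snow = (240, 245, 250)       # illustrative snow-scene background
backing_box = (20, 20, 20)   # dark box drawn behind the text

print(f"white on snow: {contrast_ratio(white, snow):.1f}:1")                # ~1.1:1, unreadable
print(f"white on backing box: {contrast_ratio(white, backing_box):.1f}:1")  # ~18:1, well above WCAG's 4.5:1 minimum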
>> Let's see the next question we might have — I think it might have been answered already. Let me think. It says it's a long one. Um, when ASR is used, do you all think it would be wise to apply the same practice? A lot of times, broadcasts on television state who provided the captions — "closed captions provided by the Department of Education." Do you think that would be helpful? What do you all think? Sean, go ahead.
>> Um — this might not answer that question directly, but as someone who is very involved with the interpreting community and the captioning community, I have noticed that a lot of these FCC restrictions and other restrictions are in a lot of ways restricting the advancement of technology and the ability to work with it. Because right now, Netflix, Hulu, and everybody — they are really on top of accessibility features; they know that the customers want that, with absolutely no oversight added. So, in some ways, I think we really need to be our own best advocates, and we need to call out those that are not providing it, rather than putting restrictions on it at the federal level right now. Maybe down the road, in a couple of years, when the technology gets to a certain point, we should implement restrictions. But right now technology is moving rapidly, and I wouldn't want to put any restrictions on that.
>> Okay. And so let’s see. There’s one more question. With the lack of uniform standards across the board for Web captions, despite WCAG guidelines, in your opinion do you think there should be more regulatory oversight on Web captioning? Is there anybody that would like to answer that question? Or Heather?
>> I think Sean answered that question pretty well. There is a lot that's going on. The thing is, it would be nice if there were a standard, but it's not necessarily possible to apply one. For example, Amazon Prime can only put captions in the middle of the screen at the bottom. Netflix wants captions to be placed next to the people speaking. The different players have different technological requirements, so you can't apply the same rules or regulations. Maybe if you had regulations and everyone had to accommodate all of this, it would work, but I feel like the world is too vast to try to reach that point — to Sean's point, they are doing a lot already and are on top of this sort of thing. It's a great idea, but it seems like too big of a hurdle to me.
>> This is Josh. I want to add to Heather — I agree with everything that Sean and Heather said — and just want to add that there are a lot of different kinds of content creators on the internet. We could talk about Amazon, Netflix, and Hulu, but we're also talking about gaming companies, or even your local school board meetings, or whatever it might be — a lot of different levels of sophistication and access to resources. So that makes it really hard to have standardization across the board. It really goes to what I'm saying: if there's something new and you don't think they are serving you, that's the opportunity to reach out and say, I don't think you are serving us, and there's an opportunity to serve us. I look at it from that perspective. Actually, when the captioning rules came about, there were maybe 10 places where you could get video content: the big four networks and ESPN and CNN and a couple of other cable networks — I know I'm aging myself a little bit. But now my kids switch from one platform to another platform, from one creator to another creator; the concept of a program guide is foreign to them, because they just go wherever they want to go to get that content. That makes standardization really hard. But it does also mean that you can vote with your eyeballs and vote with your feet to promote better content for everyone.
>> Yeah. Okay. I think that was it. Those are — we are going to have to close now or wrap up. Any last words? Anyone?
>> I just want to say it's a pleasure to be on this panel with all of you and to talk about this; captioning is always an exciting topic. And for my organization, the only reason that I got into captioning was that I noticed a lot of Deaf people complaining about captioning. I thought to myself, instead of complaining about it, do something about it — get involved. So we have an amazing team of Deaf captioners, and what we do is completely different from audio captioning, because we have to translate in the back of our heads and figure out what the Deaf person is signing and how to put it into proper English. So this journey is fascinating. I look forward to more discussions with all of you.
>> Okay. I think one last thing?
>> I just want to say real quick, along with Sean: I'm very happy to be here. Instead of just complaining about captioning, I get to do something about it. I'm always here to talk about my perspectives and things like that, and I get to meet new people who are awesome, and I get to work with Sean again for an hour. I appreciate it.
>> Okay. Wonderful. All right. So, that’s it for today. Thank you so much. All of you. Great job. Thanks so much for being with us today.
>> Have a good day!
>> Bye-bye, everyone.
cc: IPCTS (breakout)
Cristina Duarte, Linda Kozma Spytek, Cre Engelke, Erik Strand, and Matt Myrick
Transcript
>> MATT MYRICK: I want to introduce myself — good afternoon. Again, my name is Matt Myrick. I am wearing a blue polo shirt that says TDI on it. Next, I will introduce Cristina Duarte. Can you introduce yourself?
>> CRISTINA DUARTE: My name is Cristina Duarte. I have pretty long brown hair and a string of pearls, a light pink jacket, and red lipstick. I work for InnoCaption and I am happy to be here today.
>> MATT MYRICK: Next will be Dr. Linda Kozma Spytek.
>> LINDA KOZMA: Yes. I am Linda Kozma-Spytek. I am an audiologist in the Technology Access Program at Gallaudet, and I co-direct the Deaf/Hard of Hearing Technology Rehabilitation Engineering Research Center. I'm a woman with shoulder-length graying blonde hair, and I have a white shirt, a necklace, and a pink jacket on. Thank you.
>> MATT MYRICK: Awesome. Thank you. Next is Dr. Cre Engelke.
>> CRE ENGELKE: My name is Dr. Cre Engelke. I am a man with a ponytail, a blue shirt, and a jacket with very impressive elbow patches on it.
>> MATT MYRICK: Awesome. Thank you, Cre. Next, we have our next panelist: Erik Strand is the founder of MachineGenius. Erik, can you introduce yourself?
>> ERIK STRAND: Sure. Thanks, Matt. I am also a man. I also have COVID-length hair. I just really wish I had elbow patches on my jacket. I'm the founder of MachineGenius. We are the makers of Olelo Captioned Calls, which is an ASR-only IP-CTS product.
>> MATT MYRICK: Thank you all for being here. I will go ahead and start our panel questions. The very first question is for the entire panel: can each of you provide the background on the work that you do? We'll start with Cristina. I'm sorry — Linda?
>> LINDA KOZMA: Sure. At the Technology Access Program, we've been doing work related to speech recognition and IP-CTS for probably eight or so years, since 2013, when the emergency order came out from the FCC. So we've been doing a variety of kinds of work: some to gather consumer opinion about IP-CTS, and other work around the user experience of using IP-CTS. I know, Matt, that there was a great presentation earlier by Larry Goldberg on ASR, but I didn't know if it made sense to maybe just mention what automatic speech recognition is at a very high level before we continue on with describing the work that we do. So if that's okay, I'm going to continue.
Speech recognition is really the result of computer hardware systems and software-based techniques that are used to both identify and process the human voice. It's used for a variety of purposes. We are talking about the use for converting words a person speaks into text, but it is also used to perform actions based on instructions that are defined by a human, and it can also be used for authenticating users by their voice alone. So speech recognition has a variety of uses; we happen to be talking about one that's primarily about converting spoken words into text. One thing that always interests me is the history of this: speech recognition has about a 70-year history — 7-0. So it's actually been around for quite a while. It's been in the last 5-plus years that we have really seen some huge leaps in the technology, primarily around the use of machine learning, artificial intelligence, big data sets, and algorithms that permit learning from those big data sets. So I would say that while it's been around for 70 years, we have only seen huge gains in probably the last 5 to 10 years. So I don't know — maybe someone else has something to add to that description, or we can go on with the introductions of our work.
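To see the converting-spoken-words-into-text use Linda describes in its simplest form, here is a minimal sketch using the open-source SpeechRecognition Python library (pip install SpeechRecognition); the audio file name is a placeholder:

import speech_recognition as sr

recognizer = sr.Recognizer()
with sr.AudioFile("panel_audio.wav") as source:
    audio = recognizer.record(source)  # read the whole file into memory

try:
    # Sends the audio to a free web recognizer; the engines behind such
    # services are the machine-learning systems trained on big data sets.
    print(recognizer.recognize_google(audio))
except sr.UnknownValueError:
    print("Speech was unintelligible to the recognizer.")
except sr.RequestError as err:
    print(f"Could not reach the recognition service: {err}")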
>> MATT MYRICK: So next will be Cristina, to elaborate on some of the work that you do.
>> CRISTINA DUARTE: Sure. I do regulatory compliance, and I'm also in-house counsel, so I advise InnoCaption on everything legal. We're a small company, so we wear lots of hats. I get the opportunity to work directly with users at exhibits and to help out with customer support. We at InnoCaption have a hybrid method of providing captions — and as we'll get into, that can mean a lot of things, but what we do is allow our users to choose between using stenographers or automatic speech recognition, and they can switch back and forth on a call. So part of what I do is help with the testing and speak directly with users about this technology, and get to see their feedback and what their experiences are.
>> MATT MYRICK: Okay. Welcome. Thank you. And next will be Erik. Erik, can you elaborate on some of the background work that you do?
>> ERIK STRAND: I am the founder of MachineGenius. We make Olelo Captioned Calls. We were certified as the first ASR-only IP-CTS provider early or middle of last week. I appreciate TDI letting us participate in this panel. Like Cristina — and by the way, Cristina, congratulations on the award yesterday. That was awesome.
>> CRISTINA DUARTE: Thank you so much, Erik.
>> ERIK STRAND: My work, other than running the company, is heavily involved with users and customer satisfaction — gathering user feedback and trying to turn that into innovation that makes Olelo Captioned Calls a better product.
>> MATT MYRICK: Thank you, Erik. Dr. Engelke, do you want to elaborate?
>> CRE ENGELKE: One of the core pieces that made captioned telephone possible was the advancements in ASR, like Linda explained earlier. So CapTel, which is the captioned telephone, uses a human and ASR combination, where the human generally revoices what the speaking party says into the ASR to produce the captions at a fairly rapid pace. My work in this really comes in two areas. The first is research, wherein I work with users; I actually do quite a bit of experimentation. We enroll volunteers, show them captions produced in different ways, elicit different types of feedback from them, and spend a lot of time looking at ways to hone the measurement of the responses — how errors are counted and how different types of changes in the captions might influence the users' experiences. And then the second part, like Erik was saying, is the development side. We take what we have learned on the research side, from experiments and all the literature and everything else, and fold that back into our products, whether it's on the front end as something that the user sees as a feature, or on the back end as something that improves accuracy, speed, and the like for users overall.
>> MATT MYRICK: Awesome. Linda, looks like you have your hand up. Do you want to add anything else to this background? No?
>> LINDA KOZMA: No. Sorry. If I look that way, it was unintentional.
>> MATT MYRICK: Awesome. We will move on to the next question, about opting for an IP-CTS provider. Linda, can you identify who those IP-CTS providers are?
>> LINDA KOZMA: Sure. So I think, um, there are seven providers — and I know, Erik, you mentioned that you are conditionally certified; all seven providers are conditionally certified for IP-CTS. Anyone who would like more information, the FCC actually has a page on their website about the IP-CTS providers where you can see the names and links to their websites. They include CaptionMate and Olelo, which are the two ASR-only providers currently. We have ClearCaptions, Hamilton CapTel, CaptionCall, and T-Mobile, and we have InnoCaption, which Cristina described for us. All of these services and providers use ASR to some extent. Cre, I think you were describing a hybrid model, which can look different depending on how you implement it, but you were describing something traditional, where there is someone listening to the calling partner of the IP-CTS user and revoicing into an ASR system, which generates the captions. More recently, some of those longer-term providers are now getting into ASR-only captioning, using both ASR and hybrid models. And, of course, Cristina mentioned that InnoCaption also has stenographers generating captions. Some of the providers allow people to choose whether they receive those captions via ASR only or some other method, if that's an option with that particular service, and others will do the switching themselves. Most recently, providers have been doing that on overflow calls, but it looks like it may be the case that some companies are moving to use ASR for more than just overflow calls. So, to recap: there are providers that are ASR-only, all providers use ASR in some form, and many of them use a hybrid model of some sort.
>> MATT MYRICK: Very good. Thank you, Linda. Can somebody from this panel elaborate more on the revoicing? I think our audience would like to understand more about how revoicing and ASR work together as two integrated technologies. Cristina? Cre? Does anybody want to respond to that?
>> CRISTINA DUARTE: I’ll let Cre talk about assisted technology.
>> CRE ENGELKE: I'll be happy to talk about revoicing. When CapTel was originally developed, the obvious way to provide machine-generated captions was to train the ASR to a particular voice. So you had what was called at the time speaker-dependent voice recognition technology, and individual captioners would spend quite a bit of time training ASRs to their particular voices. As in other forms of relay, they would listen to the voice of the speaking party — the party who was going to be captioned — and they would repeat it word for word into a speech engine that was trained to their voice. We continue to use this technology along with many others that have been developed along the way. There's a whole host of tools that have been developed, both by our company and by other companies I know of, that have expanded from revoicing to include all sorts of mechanisms. But that is essentially how revoicing works: the CA hears the voice of the party to be captioned and repeats everything that's being said into a voice recognition engine that has been trained specifically to their voice.
>> MATT MYRICK: Thank you, Cre. Does anybody want to add to that, or is it the same for all IP-CTS providers and the technology itself?
>> CRISTINA DUARTE: I think revoicing, Matt, is the same to a certain extent with the older legacy providers — CaptionCall, CapTel, T-Mobile, and Hamilton. I can't say exactly how their technology works, but revoicing is what they're used to.
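As a conceptual sketch only, the revoiced-captioning flow Cre describes can be summarized as a small pipeline. Every function below is a hypothetical stand-in, not any provider's real API:

def call_in_progress() -> bool:
    """Stub: in a real service this would track the live call state."""
    return False

def hear_remote_party() -> bytes:
    """Audio of the party being captioned, as heard by the CA."""
    return b""

def ca_revoices(audio: bytes) -> bytes:
    """The communications assistant repeats the words, verbatim, in their own voice."""
    return audio

def speaker_trained_asr(ca_audio: bytes) -> str:
    """Speaker-dependent engine trained, at length, on this one CA's voice."""
    return "[caption text]"

def send_caption_to_user(text: str) -> None:
    """Deliver the caption text to the IP-CTS user's display."""
    print(text)

while call_in_progress():
    chunk = hear_remote_party()
    revoiced = ca_revoices(chunk)                         # the human step
    send_caption_to_user(speaker_trained_asr(revoiced))   # the machine step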
>> MATT MYRICK: Any other questions for the panel? The next one: in your experience, what are the benefits and weaknesses of ASR? Over to Erik. Erik, can you elaborate on that?
>> ERIK STRAND: Sure. Thanks, Matt. So the raison d'être of my company, MachineGenius, and the Olelo Captioned Calls product that we produce is the very idea that ASR can outperform human captioners in the majority of situations. I think what's fairly clear is that ASR can outperform communications assistants in terms of latency. So certainly, if ASR can produce captions on a per-word basis with a latency — that is, the time between when the word is spoken and when the word appears on a user's screen or device — of less than 2 seconds, that is fairly clearly better than what's achievable by communications assistants. But also, I think there's strong evidence to suggest that in the nominal case, the baseline case of a good call, ASR actually outperforms in terms of accuracy. We have seen this evidenced in MITRE findings and in third-party findings. Which is not to say that in every case ASR will outperform the accuracy of a human communications assistant, but in many cases it does. And we get that feedback all the time.
Other things that recommend ASR: privacy. It's not just possible, but actually in fact the case — and Cristina, no doubt, will attest to this — that there's simply no third party on a call when you work with ASR, at least the way our systems are constructed. So even though there is a cloud-based ASR provider on our calls, there's not anyone listening to the call. Those cloud-based ASR providers are prohibited by their own privacy policies from saving any of that information or from using any of that information. And of course, we internally don't have any access to what's happening on those calls — again by policy, and conforming to the FCC regulations, we have no visibility into anything that is said on a call. Another thing in ASR's favor is availability: there's no reason, at times of high throughput, to have to wait on a CA to be available. ASR is available; it is more or less scalable. Likewise, on long calls — where the FCC regulations permit switching out CAs every 10 minutes — ASR-based calls can continue indefinitely, which is really helpful for a very specific class of calls, like, unfortunately, waiting on hold for Social Security or COVID checks, things like that. It turns out to be very useful. But in fairness, we should talk about weaknesses. Larry, earlier, I think outlined this very well. When there are heavy accents or background noise or other call quality issues, where the signal of the call is harder to distinguish from the noise — yeah, for sure, a human can transcribe that, can caption that, better than ASR today. With that being said, there are advances being made. Forgive me for running long on this, but it does cover a lot of ground. I watched a lecture online that said that for background noise on phone calls, there are only about 160 noises that people care about. ASR engineers can figure out what those are, and we're moving in that direction, so that non-verbal cues about what's happening on a call can be communicated on the call. And likewise, there's a certain amount of trust in a human actively transcribing a call versus a machine doing it. And we understand that. Some of the hesitancy to use ASR for these purposes is based on: I'm not really sure it works. So I think those are the two key weaknesses. Thanks, Matt, for letting me run a bit long.
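Erik's sub-2-second figure is a per-word latency: the gap between when a word is spoken and when it reaches the user's display. A minimal sketch of how such a measurement might be computed from timestamped events; the numbers are invented:

from statistics import mean, quantiles

# (time word was spoken, time word was displayed), in seconds into the call
events = [(1.20, 2.45), (1.80, 3.10), (2.50, 3.90), (3.10, 4.20), (4.00, 5.95)]

latencies = [shown - spoken for spoken, shown in events]
p95 = quantiles(latencies, n=20)[-1]  # 95th percentile

print(f"mean latency: {mean(latencies):.2f}s")
print(f"p95 latency:  {p95:.2f}s")
print("meets sub-2s target:", all(l < 2.0 for l in latencies))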
>> MATT MYRICK: Thank you. And next, Linda: can you elaborate on some of the benefits and weaknesses of IP-CTS?
>> LINDA KOZMA: I concur with what Erik has said with regard to benefits. I'm also thinking about the concept of being able to do multi-language translation. I think ASR has the potential to be hugely beneficial in that area. Many providers do English and Spanish IP-CTS, but there are many, many other languages, and I think that's one place where ASR could potentially be hugely beneficial. I guess the other thing that I also think about is that the captions are provided to the person with hearing loss who is using the service, but there may be some ways to, you know, provide captions to the hearing person as well, to be able to monitor what those captions are. I think that's much more feasible when you have something like ASR, a machine-driven service. Certainly, the issues around the weaknesses of ASR are the same ones people have mentioned — in some respects, speaker characteristics, as Erik said: when you don't have clear speech, or the rate of speech is too fast, or in some cases there is overlapping speech because people are speaking in the background. Accented speech and all of those things can be problematic. But I think the difference is that the machines are learning over time, and they're pretty good; they're learning, and they're pretty consistent in the results they produce. Consistency and reliability with ASR and machine learning is something that is important. And Erik, I was really interested in your comment about trust, mostly because this issue is not something that's specific to automatic speech recognition: in terms of gaining trust, it's simply that humans judge mistakes made by algorithms more harshly than mistakes made by other humans, whether it's captions or giving directions, all those sorts of things. There's this human bias against algorithms. So the more experience we get with these things, the more trust will be gained. There are also going to be edge cases — Cre, you mentioned how ASR was speaker-dependent, and we now have much more speaker-independent recognition, so we don't have that issue nearly to the same degree — but there are still those edge cases where you have people whose speech is not typical. There is definitely work going on in that area, gaining data about other types of accents and speech characteristics, and the machines are demonstrating pretty amazing results, in my opinion. So it's interesting and fascinating to see what happens with ASR.
>> MATT MYRICK: Thanks, Linda. Next, we have Cristina. Cristina, can you elaborate?
>> CRISTINA DUARTE: Sure. I concur with what everybody said about automatic speech recognition: where it works, it works really well. It's fast. It can be accurate, but it depends so much on the speaker. And, you know, I had here the things that both Erik and Linda hit: the speed, the fact that it's verbatim, the consistency. Where ASR sometimes falls short is the inclusivity of speech patterns. I have parents who are hard of hearing, so I was taught to enunciate my words and speak clearly, and my voice processes wonderfully on automatic speech recognition. My captions are fast and they're great. Now, take my dad, who was born in Portugal with a profound hearing loss, and ASR gets so lost sometimes. Those captions are embarrassing. The way that InnoCaption developed our hybrid switching feature was that our co-CEO Joe was testing the platform before we put it to market, and he made a call to an airline. The beginning part of the call was really great because it was a computerized system — computers understand computers — and after a significant hold, he was transferred to a representative whose voice, for one reason or another, didn't mesh well with automatic speech recognition, rendering the captions useless. At that moment, he was in a situation where he could either ask a third party for help on the call — which, as we all know, defeats the purpose of accessible technology for that particular call — or, you know, hang up, disconnect, and start the whole call over again. And that is where the idea for this in-call switch, for lack of a better term, was born, because he had that experience. I've had the opportunity to test out speech recognition, and a lot of the time I think I'm getting pretty good at hearing somebody's voice and being able to put money down and say: this person is going to process well, or this person won't. About six months ago, I did have a little bit of a surprise where I would have lost money if I had been betting. It was a presenter of a webinar who had a really clear voice, a female speaker — men's voices tend to process better, but her voice was very, very clear, kind of like mine — and I had the captions on and I was very confused as to why her captions were not good. And then I realized she was nervous. Her voice was shaking, and I guess the automatic speech recognition platform didn't recognize what was going on. So don't get me wrong: yes, there are weaknesses, but it is a great technology. As with what Linda was saying, when I saw CaptionMate and Olelo coming into the market with ASR only, I got very excited, because there are whole segments of the deaf and hard of hearing community that are still stuck back where my parents were in 1992, with me answering the phone. I still remember: "Can I help you?" That was just, you know, how my parents took their calls, but people who don't speak English in the United States are still stuck in that position. And ASR creates these wonderful possibilities for other segments of the community. As Erik was saying, it is incredibly fast. So I am very excited about the possibilities. With that, I will hand it back to you, Matt.
>> MATT MYRICK: And Cre, can you elaborate?
>> CRE ENGELKE: I am trying to think if there is anything left for me to say. I feel all the thunder has been stolen — you guys have made great points. The speed, the low latency, is terrific. I think the general capacity for high-volume call traffic is unrivaled; it is obviously nearly infinite. And high accuracy under ideal conditions is uncontested. To Cristina's point, yeah, if you speak perfect SAE — I mean, look, I love it. I'm the most unmarked category because, like Cristina, I speak white, middle-aged, middle-class, male, able-bodied, middle-America-educated English. ASRs eat it up. And the further you get away from that model, the more trouble you tend to have. It has been documented in all sorts of studies; Larry talked about this in the last session. That brings up another issue for me, with respect to Linda, what you were talking about in terms of reliability and consistency, and Erik, what you were talking about in terms of the testing that's been done. I make a distinction: consistency means if you play the same audio to a system, do you get the same text output, whereas reliability means if you play 100 different voices in 100 different contexts, what do you do with all those different outputs? And my concern, coming back to this question — I guess what I'm saying is that consistency is a double-edged sword. If you test and train ASR on voices like mine and Cristina's and Erik's, you end up with a particular model of speech that it does extremely well on, and you are priming it to not do as well on voices that are different. What you end up with then is the potential for a systemized process for ageism, racism, sexism, ableism, regionalism — whatever -isms; whatever you're not testing and not teaching toward, you have the potential to bring about. Whereas if you have other systems, the variety may allow you to switch in and out. So Erik brought up a great point: if you're sitting on hold for hours and hours, having a consistent service that you can learn the ins and outs of can be a very powerful tool. But knowing that if you need to cycle out of that you can go to something else — having a variety of different models to choose from — is important as well. Again, to Linda's point, these are things that are evolving. I will not sit here and say it's not a moving target; this is something that is continually getting better. I don't want to come down harshly for or against it. I think there are pros and cons on both sides, and they're really important. So maybe I just did a great job of summarizing all of your points, guys.
[Laughter]
>> MATT MYRICK: That's great. Thank you, Cre. The next question I have here for the panel is for Cristina and Erik. Can you tell me more about the privacy and confidentiality of call content when it comes to ASR with IP-CTS?
>> CRISTINA DUARTE: I will go ahead and take it. You know, I want to start by saying that when we originally released ASR, we got so many e-mails from users who were very concerned, because I think when people think about ASR, they immediately think of machine learning: your information is stored in the cloud, and then what happens to it? And luckily, as with all IP-CTS services, confidentiality is key. We're all regulated. We're not allowed to keep transcripts beyond the duration of a call. I know for us — and we have it publicly on the record — we use Google with our own proprietary tweaks, and I assume that other ASR providers have these kinds of options. So while, yes, ASR does machine-learn, it doesn't happen on IP-CTS calls; these platforms do it in other ways. It is 100% confidential and private, so users don't have to worry about that. Again, we are regulated. Erik, is there anything you want to add to that?
>> ERIK SRAND: I will just add the following. I will reiterate that it is 100% private. No one has any access to what happens on any conversation, including our ASR provider. I will say also that, in my experience working with our users, they almost don't care about machine learning; they really just care about how effective this is. And that's far and away the bulk of the feedback we get. Occasionally we get someone who asks, you know, how is this being generated — are you using ASR or CAs? — but without real regard for privacy, and equally often we have users say it tends to perform better. Although we are regulated about it, there are ways for the DNN — the ASR — to get trained, and users care about what's effective. Like with many online services — and again, everyone here is 100% compliant on privacy — they really want it to be effective. Some would even say: privacy doesn't matter that much to me; just make it as accurate as possible. That's my experience.
>> CRISTINA DUARTE: And, um, you know, Erik, I wonder if that's been your experience because you are an ASR-only provider — obviously you and I have had this discussion. As an ASR-only provider, the choice is inherent, because people seeking out the platform want to use an ASR-only platform. Our user base is more geared toward wanting the stenographers, and then trying out the ASR, because their baseline was human-assisted and they understood what that meant for their privacy. I do think that both of our experiences are very much demonstrative of different segments and how they feel. I am trying to explain why maybe Erik is seeing a different reaction from the consumers he deals with as opposed to the ones we hear from.
>> ERIK SRAND: Let me echo that. I think Cristina is right. One of the things that I'm sure Cristina and I agree on is that consumer choice is paramount: let the user decide what's best for them. Of course, since we promote ourselves as ASR-only, we will find people who are more attracted to that kind of service, and likewise, people who want a hybrid service might gravitate toward InnoCaption.
>> MATT MYRICK: Because of timing, we will move on to the next question. I will separate everyone into two groups, just because we have two people who focus on research and two who are more consumer-facing providers. So the first question is for Cre and Linda. How are researchers evaluating the quality of ASR captioning? Cre?
>> CRE ENGELKE: Thank you. Sure. So as I said, we do quite a bit of research in this domain, both to make sure that we're providing users with the best possible services and also research that we share publicly in terms of how to approach and study these things. And this is something that Larry kicked off his presentation with, so I will start with that.
So one of the big things we look at is: how do you measure the weight of an error? There's been a lot of discussion about this in different areas. Let me start off by saying that accuracy is really a discussion of errors, which seems silly, but errors are the inverse of accuracy; what you're counting is how many times the ASR got it wrong. So one of the biggest things we look at is what makes an error wrong, and what is the impact of that error on the person using the system? For example, if I were to say "I'm going to go to the store today" and the captions came out with a minor slip — say, "I'm going to go to store today" — you're likely to read over that and have no problem. However, if the captions come out "I will hippopotamus a hippopotamus," you will say, "What?" And that's going to change the direction of our conversation. Likewise, if I say "I'm going to go to the store today" and the captions come out "I'm going to go outdoors today," you will read that, and everything about it is going to look and feel right — it's going to change the rest of our conversation, but the captions are going to look perfectly correct, and we will have to figure out afterward what went wrong. Now, Linda brought up a really interesting and excellent point here, which is that we tend to judge some types of errors more harshly. As Larry brought up, there are some words that ASRs like to insert that are at times comical — I'm talking about the no-no word list — and if you look at the ways so many ASRs are trained, these things will come up more regularly than maybe we're anticipating. All this is to say we're spending time looking at the impact of errors: how they influence things, what they do to the structure of the interaction, how people deal with them, and what that does to their experience of the telephone call. And then we work backward from that and try to develop tools for it. So we've been involved in a number of projects that are not just ours, and we have been working with the joint providers group now for about 3 years, helping them develop what I think is probably going to be the most robust research on IP-CTS captioning that's ever been done. I don't want to say we're the only ones doing research here; it draws on the five providers that had been certified up to the point when we started the group. So we do that sort of work. We also look at delays the same way: what are the different impacts of delay, in terms of how it is measured and what effects it has on the interaction? And then finally, of course, we do the sort of research that everybody does, measuring objective speed and accuracy across all these different ASRs so that we can get a lay of the land and say, well, this is the one we want to use and this is how we want to tweak it.
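To make the weighting idea concrete, here is a minimal Python sketch of a weighted word error rate (WER). The per-error-type weights and the sample sentences are hypothetical illustrations, and weighting by error type is only a crude proxy for the semantic-impact weighting Cre describes; this is not any provider's actual metric.

```python
# Minimal sketch: word error rate (WER) with per-error-type weights.
# Weights and example sentences are hypothetical, for illustration only.

def align_errors(ref_words, hyp_words):
    """Count substitutions, deletions, insertions via edit-distance alignment."""
    R, H = len(ref_words), len(hyp_words)
    # dp[i][j] = (cost, subs, dels, ins) for ref_words[:i] vs hyp_words[:j]
    dp = [[None] * (H + 1) for _ in range(R + 1)]
    dp[0][0] = (0, 0, 0, 0)
    for i in range(1, R + 1):
        dp[i][0] = (i, 0, i, 0)          # delete all reference words
    for j in range(1, H + 1):
        dp[0][j] = (j, 0, 0, j)          # insert all hypothesis words
    for i in range(1, R + 1):
        for j in range(1, H + 1):
            diag = dp[i - 1][j - 1]
            if ref_words[i - 1] == hyp_words[j - 1]:
                cand = [diag]                                          # match
            else:
                cand = [(diag[0] + 1, diag[1] + 1, diag[2], diag[3])]  # substitution
            up, left = dp[i - 1][j], dp[i][j - 1]
            cand.append((up[0] + 1, up[1], up[2] + 1, up[3]))          # deletion
            cand.append((left[0] + 1, left[1], left[2], left[3] + 1))  # insertion
            dp[i][j] = min(cand)          # lowest raw edit cost wins
    _, subs, dels, ins = dp[R][H]
    return subs, dels, ins

def weighted_wer(ref, hyp, w_sub=1.0, w_del=1.0, w_ins=1.0):
    """Plain WER when all weights are 1.0; raise a weight to penalize that error type.
    Note: errors are counted on the minimum-cost alignment, then weighted — a
    simplification compared with weighting during alignment."""
    ref_words, hyp_words = ref.lower().split(), hyp.lower().split()
    s, d, i = align_errors(ref_words, hyp_words)
    return (w_sub * s + w_del * d + w_ins * i) / max(len(ref_words), 1)

ref = "i am going to go to the store today"
print(weighted_wer(ref, "i am going to go to store today"))             # minor slip
print(weighted_wer(ref, "i am going to go outdoors today"))             # plausible but wrong
print(weighted_wer(ref, "i am going to go outdoors today", w_sub=3.0))  # same errors, judged harder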
>> MATT MYRICK: Linda?
>> LINDA KOZMA: At Gallaudet, we do much of the same research that Cre described. I’m glad you focused on accuracy, Cre, because I want to talk about latency.
[Laughter]
So I'm glad to have the opportunity to go in that direction. Much of our work is under a sub-contract with MITRE, so a lot of it is under an NDA. But with that being said, um, we have recently done some work where we looked at conversations between hearing calling partners, and I think the thing that struck me the most about the analysis — and it's completely consistent with what's generally in the literature — is the degree of interactivity that occurs in those conversations. We do things like look at how long a turn lasts and how much time there is between the end of one turn and the beginning of another. It turns out that when we looked at hearing calling partners, there was a lot of overlap — in some cases, no time at all between the end of one turn and the beginning of another — and on average, that time difference was less than 500 milliseconds. So that gives you an indication of the level of interactivity that occurs, and that level of interactivity can be supported far better with ASR, where the delay may be on the order of one to one and a half seconds, compared to something that might be longer. So part of our work, which I find really interesting, is looking at other types of telephone calls; conversations between hearing calling partners are one area we've looked into, and it has informed us and made us think about latency in ways that we might not have otherwise. So that was revealing for me, and I do think in terms of functional equivalence it is an important piece as well. Thanks.
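As a concrete illustration of the turn-gap measurement Linda describes, here is a minimal Python sketch. The timestamped-turn format and the numbers are hypothetical, not Gallaudet's or MITRE's actual data.

```python
# Minimal sketch: inter-turn gaps from timestamped turns in a two-party call.
# A negative gap means the next speaker started before the current one
# finished (overlap). Turn format and values are hypothetical.

turns = [  # (speaker, start_sec, end_sec)
    ("A", 0.0, 2.4),
    ("B", 2.3, 5.0),   # starts ~0.1 s before A finishes: overlap
    ("A", 5.6, 7.2),   # starts ~0.6 s after B finishes: gap
]

gaps = [nxt[1] - cur[2]                  # next start minus current end
        for cur, nxt in zip(turns, turns[1:])
        if nxt[0] != cur[0]]             # only count speaker changes

print(gaps)                              # approx [-0.1, 0.6]
print(sum(gaps) / len(gaps))             # mean gap; under 0.5 s in the data Linda cites
```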
>> MATT MYRICK: Thank you for that insight. Next, we will go to Erik and Cristina: as providers, how do you evaluate the quality of ASR captioning?
>> CRISTINA DUARTE: Erik, would you like to go first?
>> ERIK SRAND: Sure, thanks, Cristina. To an extent, we do the research that Linda and Cre do, though certainly not in the same depth they take it to. Both Cristina's company and ours — and every company that offers an ASR solution for IP-CTS right now — evaluate the range of ASR providers on the market. And I'm pretty sure that none of these IP-CTS providers does its speech recognition in-house, for a very good reason: let's leave the people who are really good at this stuff to do their work, take from the best of them, and incorporate it into our offerings. So we go through exercises periodically — and there was an initial exercise before we launched our offering in the middle of last year — to say, okay, which is the best of these ASR offerings? That included latency and accuracy and, to some extent, although not to a scientific extent, the importance of inaccuracies. We feel very, very confident that we have chosen the best ASR provider available. So what we do today is listen to customers. We actively ask customers in user groups that we have set up specifically to give us feedback, and we also hear from customers who just send us unprompted feedback, if you will: I had trouble with this kind of call; on the other hand, this is great. As commercial entities — I know Cre and Linda do more serious academic work — we mostly ask: what's working for you, what's not working for you? In our experience — and this is not fiction — our customers, who again may be self-selecting based on the fact that we're an ASR-only provider, think it is overwhelmingly better for them. It is very rare that we get a complaint or a question about the quality of the speech recognition. They may say they have gone out of signal range or they have bad audio quality, but very rarely is it a problem with the actual accuracy of the captions. And that's our principal mechanism right now for making things better: listening to the users.
>> CRISTINA DUARTE: And to echo what Erik is saying about listening to users: I fully agree with him. I think it's really important that we are listening to the community and the individuals who are benefiting from this service. Research like Cre's and Linda's is absolutely irreplaceable and important for understanding things on a more complex level; for us, the focus is on what our user experience is. We do test calls, obviously, and monitor the quality of ASR — not only our solution but other solutions on the market — to make sure we are offering what we believe is the best one. And our users have the opportunity at the end of each call to submit a 5-star rating. We use these internally to monitor quality depending on whether it is a CA or ASR call, watch for trends, and see how people are liking it — whether they're rating the calls high or rating them lower. And like Erik, we also have users who provide us feedback and are very generous with their time. They use our services and let us know where we can do better or what worked really well for them, and I think as long as we continue listening to consumers and driving development in the direction they want, the future looks very bright. So that is how we monitor our quality.
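A minimal sketch of the per-mode rating monitoring Cristina describes might look like the following; the data, field names, and window size are hypothetical, not InnoCaption's actual pipeline.

```python
# Minimal sketch: track a rolling average of post-call star ratings per
# captioning mode (CA vs ASR) to watch for trends. Data is hypothetical.

from collections import defaultdict, deque

ratings = [  # (mode, stars), in call order
    ("ASR", 5), ("CA", 4), ("ASR", 3), ("ASR", 5), ("CA", 5), ("ASR", 2),
]

windows = defaultdict(lambda: deque(maxlen=50))  # keep the last 50 calls per mode
for mode, stars in ratings:
    windows[mode].append(stars)

for mode, recent in sorted(windows.items()):
    avg = sum(recent) / len(recent)
    print(f"{mode}: rolling average {avg:.2f} over last {len(recent)} calls")
```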
>> MATT MYRICK: The last question is for everybody. What does the future hold for ASR and captioning? Does anyone want to speak to that?
>> ERIK SRAND: Cre, why don’t you go?
>> CRE ENGELKE: Again, we've been in the business of using ASRs in some capacity for a very long time. As Linda said, ASRs have been around even longer, but it is only recently that we have seen this big jump in the technology. I remember, actually, 20 years ago when CapTel was introduced, being asked: how long are you still going to need a CA? Are you just trying to replace all the interpreters, all the CAs? Are you getting rid of everybody and moving to a purely automated system? And I remember the answer we came up with was something like: look, ASRs definitely aren't good enough all the time. We're not just captioning some calls; our job is to caption everything, all the time, and to do the best possible job at all times. So we said no, it is not ready yet; it's going to be five years. And five years later, someone asked the same question: it's not ready, it will be five years. As Linda said, it's getting better and better the closer we get to now, but it is always five years — next year it's five years, and the year after that, five years. And now we're starting to see tremendous advancements in ASR. We see a real step change; things have accelerated, and you see big engines coming into the market. Like everybody on this panel, and probably most of the people watching us, I have played with all of them, under tons of conditions; we have tens of hours of test scripts we use to evaluate ASRs. I can tell you, when they work great, they work really, really great. And when they don't — well, that makes me say that in the next five years, you know, we will see something really amazing.
>> MATT MYRICK: Thank you, Cre. Does anyone else want to add to that? The last question for the ASR captions?
>> CRISTINA DUARTE: Sure. I will go next. I hope the future holds more inclusivity for more speech patterns — people who have accents — and that ASR becomes more encompassing, so people can reliably know that if they're using ASR only, no matter who they're calling, they'll be able to get quality service. I hope the accuracy in different languages improves. We are very lucky to have individuals on staff whose first language is Korean and individuals whose first language is Portuguese, so we've gotten to test a little when it comes to other languages and automatic speech recognition. Depending on the language, it is sometimes very, very hit or miss. So further development there, as I mentioned in a previous question, would allow different sectors of the community to benefit from this technology as well. I know we're running out of time, so I will kick it over to Erik quickly.
>> ERIK SRAND: Linda, let me kick it to you first.
>> CRISTINA DUARTE: Oh, I’m sorry.
[Laughter]
>> LINDA KOZMA: It's okay. It's okay. I was looking at the Q&A, and Dana has an interesting question that relates to this idea of other kinds of information that we can get from speech, like somebody being sarcastic. Certainly that's not something that ASR does now — and maybe even humans don't do it particularly well or recognize it sometimes — but that direction of being able to do other types of recognition is promising. We have already talked about dogs barking and other aspects of sound that are informative during a conversation, and even what Dana is talking about with regard to the intent of the speaker may be something we can look forward to in the future. There's always the question of how you indicate that kind of speaker intent — for example, sarcasm — when you're doing transcription, particularly at that rate. So it's hard to know where we might go, but that could certainly be one area where we'll see breakthroughs. I agree with Cristina on this idea of becoming more and more inclusive of what are called edge cases and making them less so, so that more people are able to use and make use of ASR; that is important. My personal opinion is that we will see things get better, and personally I'm excited to see that happen. I will just mention, Erik, if you don't mind, the other question in the Q&A, which I think makes a good point about people who combine the auditory and the visual signal together. That's a really important point, because people weight those two signals differently and make use of captions differently depending on how much they rely on auditory information. I think it's an area that's really ripe for exploration; having a better understanding of how people with different amounts of auditory capability make use of captions is something I don't see very much of, but it absolutely is important. That might be partly because some of the measurements that would help us understand it better are not easy to do, but it is definitely an area that deserves exploration. Erik, back to you. I know it's 4 o'clock. Sorry.
>> ERIK SRAND: In 30 seconds: I want to echo something you were just saying. My vision, or hope, for the future is that wherever there is an auditory or voice interface, it works for people who are deaf or hard of hearing or deafblind, in whatever modality they need to communicate. And I was going to say what you said, which is that some of that technology will be on the provider side and some of it will be worn by the user themselves, right? Whether it's cochlear implants or some kind of fancy glasses that transcribe everything that's said, it's going to be a continuum, a meeting of what is offered by providers of these services and interfaces and what is local to the customer. That's where I see the future.
>> MATT MYRICK: Okay. Wow. That’s all I can say. You guys have been a great panel. I want to thank you for the full hour. I’m sorry we don’t have any time for Q&A, but again, this has been very informative. Thank you, everybody. And you all have a great day.
>> CRISTINA DUARTE: Thank you, Matt.
>> LINDA KOZMA: Thank you, Matt. Bye-bye.
Accessible Tele-World
Carrie Lou Garberoglio, National Deaf Center
Transcript
>> CARRIE LOU GARBEROGLIO: Hi.
I want to make sure that interpreters are set and captioning is set.
OK.
It seems like we’re on.
So before I start, I want to provide a visual description: I'm in my 40s, a white male with long brown hair, pink glasses, and a black shirt, against a tan, cream-colored background.
I want to thank you all for taking the time to participate in this presentation, watching this video, and Q&A.
I’m a child of the Internet.
I grew up as a teenager at the time when AOL and AIM chat rooms became ubiquitous.
People assume that we all have access to the Internet and virtual spaces in an equitable way, but when you look at the data, trying to better understand how many people currently have access to the Internet and actually have the equipment to access it — laptops, computers, phones — the disparity is there. There's a big gap: many individuals in the United States lack access to the Internet because of equipment.
As we move forward, as virtual workspaces and online learning opportunities increase, we don't want to leave people behind. So I hope we can keep that in the back of our minds during this discussion and this three-day conversation about the future: who do we want to include, and how do we make sure everyone is included?
So if you have any questions, thoughts, or ideas to discuss here, we're open to that and to answering your questions.
I see one person has a comment.
Many deaf people in their towns and cities, in rural communities, don't have resources like income to pay for the Internet.
And deaf people work less, or have fewer opportunities than hearing people, so that impacts a deaf person's income.
So many people are not able to pay for phone and Internet services.
That’s the reality in the deaf community around the United States.
So one question is whether state agencies should start providing phones and tablets; I do see that happening more and more across the United States.
More states and agencies are providing access to phones and tablets.
To provide the variety of resources and support that an individual might need, and to support college students who are deaf, we work with colleges and universities, which usually offer information to students through phones or tablets.
Because learning management systems are part of the college experience.
And so we have to have a design that fits the student, whether it’s a phone, a tablet, or a computer.
Some are lower-cost devices compared to a computer.
So we, as agencies, need to think about a way to provide access to the most people possible.
So I think tablets might be a solution for that.
So the next question we have is from a state director.
And there’s a lot of data, but we do have personal stories out there, and, yes, they’re true.
Working at NDC, my own personal work, I love data, I’m very comfortable with statistics and numbers.
But people really learn the most and feel emotional changes based on personal stories.
On our website, we do have a lot of data and stats, but we also have a lot of videos of personal stories from deaf consumers and deaf people around the country.
I know we don’t know all deaf people’s experiences, not everyone’s the same.
Everyone has a different lived experience.
And that is true for everyone.
So that’s true for the hearing community and the Deaf community.
One thing might not work for one person but might work for another.
So there are videos and stories about others who don’t have access and we do try to collect that information.
Someone made a comment about some areas lacking Internet access.
And if you're trying to call and connect with a hearing person, they disconnect because the technology freezes or the technology just isn't there.
There are colleges and universities that provide online classes.
If the Internet is not great, accessibility doesn't happen.
Meaning if the connection is not great, the class is not going to be accessible.
The deaf student is going to miss out on everything that’s happening in the class and they don’t have the same experience as their hearing peers.
The hearing peers might be able to follow along and get enough, but signing and video require a lot more bandwidth than audio alone.
So that is what we’re seeing nationwide.
Those who live in rural areas struggle with that access, it’s very tough for them.
And how do we find them, how do we get them access to high-speed Internet for training and professional development and educational opportunities, and how do we connect them with that virtual world? That is the question, yeah.
So somebody suggested that we should encourage TV exposure with captions and accessibility for deaf people.
I agree, but we don’t have a budget for that at NDC.
Are there any more questions or any more thoughts?
No?
OK, all right.
Well, thank you for having me here.
And you can check our website out.
You can find our contact information there and e-mail us at any time.
We can give you information on where to find the data.
We’re definitely here for you.
Oh, I see one more question that just came up.
So the question was whether we have data — statistics — on colleges or universities providing resources and accessibility for disabled students and deaf students.
That’s a good question.
And we know that policies vary all across the United States.
Some colleges do provide wi-fi hot spots and laptops for students in general.
I don’t know if they have a special policy in place for disabled or deaf students but that is something that we’re going to look more into for the fall.
All right.
And I’ll type the e-mail address for NDC on here so you can see that.
I’m still reading so just give me a moment.
So somebody had asked how can we get more people to be aware of VRS.
That’s a great question.
At NDC, we talk a lot about how we can change people's attitudes in particular, and that's hard; that's the hardest thing I think we have to do.
We can teach and explain, and people do understand, but when it comes to actually getting the call, it doesn't fit what they envision a call should look like, and they hang up; that comes up often.
In the educational system, we see situations occur frequently where faculty have a deaf student in the classroom, and that deaf student has a delayed response — teachers ask the classroom if there are any answers — and it really diminishes the opportunities for the deaf student to get involved. So the question is how we can change attitudes and thinking, not only about deaf students but about people with disabilities in general.
Sometimes we need time to process the information or think about it; we need more time for the captioner. So we need to change those attitudes, and I think that's the reason why we're here today: so we can discuss how we can educate, train, and change ourselves.
So I’ll wait a few more minutes and just see if any more questions arise.
He says he's going to e-mail some information regarding T-Mobile's Project 10Million; I look forward to learning more about that.
I think the more information that we can share with each other the more resources that we can share with each other the better it’s going to be for our community and for deaf people.
Someone had asked about smartphone usage, to use that to do speech recognition for the classroom.
I see that often with ASR, because for many deaf students, accessibility is not perfect in what the system has offered us.
Of course, they offer an interpreter or captions, but sometimes the quality is just not so good, or it doesn't provide the right information, and deaf people often fill in the gaps — we use our own equipment, our own phones, sometimes even ASR.
And we try to get by.
And I think that is a concerning trend of what it’s going to look like for the future.
Thinking that people just have good enough equipment, which is not really OK.
And we see that often at the National Deaf Center: if students use smartphones in the classroom, or if they're going to use ASR in the classroom, the assumption is that they're fine. We have to remember that "fine" does not mean optimal accessibility. There are a lot of challenging questions coming up now, because technology is advancing so much faster than policies are, and we have to make sure the policies keep pace with the technology as we move forward.
You asked really wonderful questions, this is a wonderful group and I’m very excited to join this conversation, I look forward to continuing this.
Have a great afternoon and I’ll see you at the next breakout room, there are going to be three breakout rooms.
tele:CONF (breakout)
Roberto Cabrera, Sam Sepah, Christian Vogler, and Tina Childress
Transcript
>> TINA CHILDRESS: I’ll wait for the interpreter.
Ready?
I’ll start again.
Hello, and welcome to this breakout session on shaping an accessible tele-world: teleconferencing.
My name is Tina Childress and I will be your moderator today.
I am a brown-skinned woman with black and white shoulder-length hair, wearing colorful half-rimmed glasses and a cardigan over a white top.
I am smiling, and in the background there is a TDI biennial conference graphic; in the corner it says "TDI 24th Biennial Conference" in blue text, "Reset and Reconnect" in light yellow text, and "#TDIconf2021" in white text.
I feel so privileged to be one of the newer members at large at TDI and have learned so much being a part of this organization.
I am an audiologist by trade and also a late-deafened adult who uses bilateral cochlear implants. Using technology for accessibility is my jam, and I enjoy learning, teaching, and creating resources about it.
I’m choosing to speak for this panel since I’m going to be navigating different windows on my computer, but I’m also fluent in ASL.
I am so honored to welcome our esteemed panelists, Roberto Cabrera, Sam Sepah, and Christian Vogler.
I will be calling on them individually to introduce themselves and give a visual description.
Throughout this panel, I will call on them when they raise their hands to answer questions and they will pause before speaking or signing so that people are aware of where to look.
So without further ado, let’s start with Roberto Cabrera.
Roberto, you’re on.
>> ROBERTO CABRERA: Hello, everyone, this is Roberto.
I am so thrilled to be with all of you today.
I see — I know the interpreter, and so that makes me even happier.
So I’ll describe myself first.
I’m a light brown-skinned Dominican man.
I’m sitting here with a black background.
I have a black polo shirt on as well.
Let’s see.
And, yes, I’m happy to be here.
>> TINA CHILDRESS: Thanks, Roberto.
Next, we have Sam Sepah.
Sam?
>> SAM SEPAH: Hello, there.
I am Sam Sepah.
And I work at Google as an access research lead.
I have black hair.
I wear glasses and I’m wearing a Google shirt, Google Accessibility shirt.
You can see the logo of Google.
And I have a light background, with a painting of a map and a desk with a lamp behind me as well. I am honored to be invited here to share during this exciting time when our technology is changing and we are learning about each other, so thank you for having me here today.
>> TINA CHILDRESS: Sam, I want to know where to buy that shirt because I want to buy one.
>> SAM SEPAH: Well, since I work there, I get free stuff.
It’s one of my perks.
>> TINA CHILDRESS: That’s awesome.
All right.
And last but not least, Christian Vogler.
>> CHRISTIAN VOGLER: Hello, everyone.
Before I introduce myself, I just wanted to mention that someone posted a comment saying the male interpreter's voice is pretty soft, so if you wouldn't mind, please speak up a little bit or adjust your microphone.
They're saying it's better now — perfect, thank you.
All right.
So I am a white male.
With brown hair, with some white in there that is starting to pop up.
I have a dark blue button-up shirt on and a dark blue background behind me as well.
I handle the technology accessibility program at Gallaudet University, in the research group focusing on communication technology for deaf and hard of hearing people.
I'm very familiar with the topic of videoconferencing, and thank you for inviting me here today.
>> TINA CHILDRESS: So the goals of this session include providing consumers with a framework to propose accessibility standards and rules across all teleconferencing services as well as understanding current technology and policy limitations.
This topic can go in many directions so please start thinking about questions that you might have.
Please ask them in the Q&A box, but I’ll also be monitoring the chat box so let’s begin.
So the first question that I have, kind of a warm-up before we start taking questions from the audience: Why are remote working and hybrid setups not accessible enough?
What is needed to ensure they are fully accessible for all?
I’m going to call on Roberto.
What is your name sign?
>> ROBERTO CABRERA: OK, absolutely, yes, Roberto here.
I forgot to add my — some information that’s really important about me.
So my job, what I do for a living.
To explain a little bit more about myself.
So let me do that as well.
I work here at the Colorado Commission for the Deaf, Hard of Hearing, and DeafBlind.
And my name sign is this — it's signed right on the arm.
Right now I'm using ASL, but note that usually for communication I use protactile ASL.
And so, depending on whether I'm at home responding to something virtually or connecting virtually, that changes my communication style.
So to answer your question about hybrid work: in general, working virtually has opened a lot of doors for many people. As far as scheduling is concerned, you might be able to participate in events, some at home and some in the office, but it depends on the situation.
So at my job, I’m very blessed to have a flexible team, and they prioritize accessibility in everything that I do and that’s really, really key.
I’m very grateful to have a supportive team.
And they have a policy at their work where accessibility is very important.
They may have telework, right, where you have one day at home, four days in the office.
So there is that flexibility there.
And I know that depends on the area, the state, the region, wherever you work.
And how much they’re willing to accommodate the employees that work there.
And it really does depend on your supervisor, right, the people that you have on your team and the people that are supervising you.
And, you know, if you are serving the public and you want that customer service to be ideal then also that’s incredibly important.
Now, we were just talking about the deafblind community, whew, touch is the way in which we communicate.
Technology has its faults, right?
Anything could disconnect at any time.
If you are one-on-one with a person, if you’re communicating through touch, that access is going to be there, and unfortunately over tech that doesn’t happen.
So now, if you are relying on the Internet — you know, it's a hot topic right now that everybody's talking about.
Are we really talking about inclusion, or about what we'd like inclusion to be, right?
An ideal.
>> TINA CHILDRESS: Thank you. Sam?
>> SAM SEPAH: This is Sam.
So, working at Google — and I would assume it's similar at other high-tech companies or companies involved with accessibility — product design works in a similar way.
Often we see accessibility as one of two paths.
The first is modifying the space itself, so that people have equal access in that space; the other is elevating the technology, so you don't have to modify the space.
So either the space is being elevated or the technology is being elevated.
In a perfect world they'd both be elevated, but unfortunately not all circumstances allow both, so very rarely do we see both being elevated.
So whether it's the technology or an app being elevated, or the space being changed — you know, having to sit closer, or modifying the design so you don't need to be supported by technology — the question is: can captioning follow you everywhere?
Probably not.
That means the space may need to provide the captioning, and not your own phone or technology.
And that often becomes a question of where the accessibility work needs to be done and who is accountable for it, us or them — in a movie theatre, for example, that would be them, not us; the mobile user would not be providing it.
So it's a discussion of where accessibility is meant to be built in, and I'll leave that up to you to think about.
>> TINA CHILDRESS: Christian.
>> CHRISTIAN VOGLER: Christian here.
And I agree with Sam’s point.
And I really wanted to emphasize how important that point of view is for videoconferencing and how to provide accessibility.
You know, in my view, 20% is technology and 80% is how people behave.
Videoconferencing technology can add support for sign language and for deaf and hard-of-hearing users, and you can enhance that, but the communication dynamics — how we communicate back and forth — don't accommodate deaf and hard-of-hearing people at all.
So you have to remind people to follow best practices for turn-taking and for identifying who the speaker is, so you can visually go back and forth.
All of that is required as best practice for deaf and hard-of-hearing people.
>> TINA CHILDRESS: Thank you very much for your comments.
And I think that connected — oh, Roberto, did you have another comment?
>> ROBERTO CABRERA: I did.
I wanted to follow up on what Christian said.
I liked the comment.
I noticed, when I look at, let's say, the work that I do and the outreach that we do — we have a team that does outreach and develops content.
We decide among the group of us what tips and approaches are the best way to be accessible, and we design that dynamic, and we do it all online.
So, now — OK, I'm looking at the link now; I'll share it after.
So now, to emphasize that, similar to what Sam said: can technology be the same at home as it is in the office? Can we have that same kind of accessibility at home that we have in the office?
And often we end up trying to bring what we see in the office to home.
And if you have a deafblind person or a hard-of-hearing person, they may try to make that fit their computer, whatever setup they have at home, to have it ready so that they can have that same kind of accessibility — but it's time-consuming, right?
They may have to fill out a specific form to qualify for equipment at home, to be as cost-effective as possible, or they can go through vocational rehab to get some kind of services or equipment — what the government is expected to provide for disabled folks.
That fills the gap in some way, right, for people who might face a cost barrier.
>> TINA CHILDRESS: OK.
This is Tina.
I’m going to go ahead and go to the Q&A because there are a couple of questions that I think relate to what we are just talking about right now.
So the first question says: Some of us deaf-disabled consumers are not able to use smartphones because of size.
What can we do?
And so wondering if any of you have any solutions or comments about that.
Go ahead, Sam.
>> SAM SEPAH: Sam here.
OK.
So I’ve listened to different complaints and feedback, ideas presented for a while now, working in a high-tech industry for the last 15 years.
I’ve had a lot of that thrown on me.
And that’s OK, that’s all right.
So one thing I have noticed is that when people say the captions are too small, or that something isn't accessible and it impacts the user's experience on a mobile device or laptop or whatever device they're using, I will often go up to the product manager or the engineering lead and inform them that we have this group of people who are unsatisfied, and 99% of the time they're surprised; they've never heard any comments or feedback from the community, or through bug reports, that it doesn't work.
So they don't receive that information unless it comes through a public channel — you know, you can go through an app or something to let them know that this design doesn't work well for us.
So, going back: some people like me are privileged enough to direct you to the right place to submit this information. You can do this not only with Google but with any company where you've noticed something or have a suggestion for improvement — submit a bug report, call customer support, or post on a forum.
Please do let them know that the public is tracking this and that if the number of these increases, there are more chances that it will be improved.
So please, as a group, you know, make it a point to be able to communicate loudly and let them know through the public process.
And that’s my recommendation.
>> TINA CHILDRESS: Thank you for those comments.
Christian, did you want to answer that question next?
>> CHRISTIAN VOGLER: Christian here.
I just wanted to add something and that’s a great example.
It's extremely valuable, too, to practice inclusive design.
This means building all kinds of diversity into the design of the product from the beginning.
So that you can include everyone from the beginning.
And the engineers won’t have as many surprises then.
>> TINA CHILDRESS: Back to Sam.
>> SAM SEPAH: Sam here.
Yes, I'd like to respond to Christian.
We agree.
Engineers — and not just myself; I've worked with a lot of small companies and big companies alike, IBM for example.
We include that in the design.
And our goal is to always and forever be, to be inclusive.
And oftentimes we have to prove the value, which is why, if the deaf community petitions — you know, has over 2,000 people asking to fix something — the engineers will fix it, and on the next project they'll keep in mind that there's a large group of people who are passionate about this.
If they don't hear anything, they're going to assume that everything is OK, which means the commitment to that expectation is lowered.
So we need to hold the engineers accountable: if something isn't perfect, you have to remind them of that, so they can include it in future designs for tech companies as a whole.
>> TINA CHILDRESS: Back to Roberto.
>> ROBERTO CABRERA: Just briefly I want to add — thank you for those two comments.
I think that I agree with them 100%.
Inclusive design — we have to think, again, about the process of designing; it goes two ways.
I'll give you an example.
Say, a survey on a website.
You might have a survey that is not accessible, or one that is — one that is and one that isn't.
So do you use the accessible one, or do you assume? The expectation should be set from the beginning that you start with accessibility; assumptions harm the decisions that are made. It's got to work both ways.
If we are going to try to get feedback from people and the survey is not accessible to begin with, then how can they give us that feedback?
So it's layered; it's important to always keep that in mind in our data.
>> TINA CHILDRESS: Thank you.
You know, I know that we talk a lot about people that are signers.
Obviously, we’re talking also about the deafblind community.
I saw a comment from Michelle Michaels: one of the barriers that people who like to lip-read experience on live and taped videoconferences and webinars is the inability to lip-read the person speaking, because that person is not always on camera — whether it's an ASL interpreter voicing, or a recorded video with a PowerPoint and a disembodied voice.
I think we see that, that people are making assumptions and not really thinking, you know, about all of the different groups that might be viewing that video.
So, you know, like, when I talk to consumers, I talk about having an elevator speech.
Like saying: what do I need as a late-deafened person who can hear some but who also uses an ASL interpreter? I have to voice what works for me. I know it's sometimes hard for everybody to think about all of the possible solutions, right, because there are so many different groups, but if you know that you are going to participate in something and you know that you need a specific accommodation, that goes back to what you've all been saying about providing feedback.
You know, I think that’s really, really crucial.
Do you have any comments on that?
Like, for example, with Roberto, I haven’t worked with very many deafblind consumers before.
So I, you know, contacted him offline to say: hey, what do you need, so that when you and I have a conversation, we'll make it more accessible, right?
And so, you know, some of it is they don’t know what they don’t know.
You know, the people that are putting on these different events and creating these things online.
And so I think, you know, part of the burden is that sometimes we have to educate others.
So Christian?
>> CHRISTIAN VOGLER: Hi, Christian here.
Yeah, I agree with you as well.
I want to add that this is an area where we become very, very frustrated with technology, to be honest. I feel that deaf and hard-of-hearing people and blind people need to have full control over how they can rearrange the screen. For example, on a video call: enlarging the interpreter and moving them around; maybe you need captions, and want to enlarge and move those too; looking at the slides, you need to be able to move and enlarge them; or maybe there are a bunch of camera videos on the screen and it's distracting, and you want to eliminate the ones you don't need so you can focus on the ones you do need.
So all of those platforms and technology aren’t fully inclusive and supportive yet.
Some platforms have made a step toward being more supportive, but oftentimes it becomes just so overwhelming — the technology gets more and more complicated, and people spend so much time figuring out how to use it and setting it up that it's hard to follow along in the meetings anymore.
And there's still, you know, a big gap in the balance between customization and ease of use.
And that problem still is yet to be resolved.
>> TINA CHILDRESS: And this is Tina speaking.
And kind of speaking to that, you know, I'm looking at Ken's comment that captions and the CC control should always be visible to the user on all videoconferencing platforms, as part of the product design.
And just to summarize also, the fact that on many platforms, that we don’t have that control.
You know, that’s something that is an issue.
But I agree with Christian, you know, I’ve been working with him kind of informally and sometimes formally on, you know, providing resources to help the consumers understand technology.
So it’s not just, you know, hearing people and, you know, the general community not knowing what they don’t know, but we, as people with disabilities, sometimes we don’t know what we don’t know.
Or we don't know what options are available.
And so that’s, you know, knowledge is power, and that’s something that we definitely need to address.
All right.
I'm looking through the question-and-answer right now, 'cause I thought it would be better to try to connect some of these questions with what we're talking about before moving on to the next question.
And so, to Debbie Hagner's point about universal design: when we turn on captions, of course, we know many people other than those known to be deaf or hard of hearing can benefit from captions.
You know, they don’t know they have hearing loss or they realize, you know, they want to turn off the sound on their video that they’re watching.
If they’re second language learners, you know, many, many people benefit from that.
Kind of like how texting and e-mail, you know, have helped many people as well.
Gideon has a question directed to Sam.
It says: Have you ever seen or received an ASL message among those suggestions for improving a product or application?
How available is the opportunity to submit a video report?
It can be difficult for a deaf person to explain what they see or experience in English.
>> SAM SEPAH: Sam here.
I’d like to answer that.
For Gideon.
That’s a great question.
And it’s not a new topic we’ve discussed.
So, long story short: first of all, there are not enough people from the deaf, hard-of-hearing, and deafblind community sending in feedback, so we rarely receive those cases.
Therefore, there's nothing to act on.
And if it does happen, it just hasn't been found or identified yet, or even attempted.
And then secondly, I can understand that there's a barrier to access in English. If you want to express yourself by signing in a video, take that opportunity to film yourself, upload it — like on Google Drive — and send it in and say: hey, you know, I'm using sign language.
Say: please view the video at the link, or see the attachment file.
You know, maybe a survey won't allow an attachment; then include a link instead.
That way you can do your part and, you know, meet us halfway.
And I'll type something out, and if I can't, then I'll sign it.
I'm willing to do that — to give feedback and have you watch it — and that's where the partnership process comes into play; so far, the deaf community hasn't sent us any links to their videos or anything.
So back to my point: we do need to figure out how to make some noise, send feedback, and voice our ideas. We can be creative — maybe have someone help you with typing it out, and then attach a video to go along with it.
And then secondly I also want to go back to talking about captioning.
It's interesting: on YouTube, our captioning team has noticed that most people who click and use the CC feature on YouTube's platform are not actually deaf and hard-of-hearing people.
Most of them are just, you know, people out in the world.
Deaf people often click it and think it's for my community, and it's like, no — we've looked at the statistics, and everyone in the world uses it.
So to reference what Tina was saying, you know, maybe a mother and a baby, you know, are trying to watch a video and the baby’s crying and it’s too loud so she’ll be watching the video with closed captioning.
So there’s a lot of reasons for everyone to use captioning.
It's actually a revolutionary, empowering tool right now — not only an accessibility feature but a universally used design.
And so I just wanted to throw those two points out there.
>> TINA CHILDRESS: OK.
So I’m going to go on to the next question I have on my list.
And this is somewhat related to Eileen’s question in the Q&A box.
So the question is: Who should be responsible for ensuring that videoconferencing is accessible?
Is it the videoconference platforms?
The users?
Or some combination of both?
And so Eileen’s comment about leaving captions on all the time for all things, I think can be a little bit sticky, and I don’t know if there’s one right or wrong answer to that.
But do you have any opinions on any of that?
Christian?
>> CHRISTIAN VOGLER: Yes, Christian here.
Yes.
So first, I want to start off by explaining that there is a law applicable to videoconferencing at that level.
It's the 21st Century Communications and Video Accessibility Act — the CVAA.
Videoconferencing is covered there as an advanced communication service.
ACS is the abbreviation for that.
And the FCC has the authority to establish control over that piece.
And unfortunately, there is some ambiguity in how the law applies to videoconferencing, specifically around accommodations and operations.
What that means is that for videoconferencing at that level, the accessibility requirements and the way the service operates run into each other.
It’s inseparable.
OK.
So that gap should never have been left open.
We worked together to publish an article in the University of Colorado Law Review focusing on that topic, and I’ll send a link in the chat box soon so that we can all take a look at it.
But the bottom line here is it’s very complicated.
We need to put some responsibility on the videoconferencing platforms, get some consistency in what accessibility looks like, and clear things up.
And then also to establish where that boundary line is with the platform and technology and the person who is setting up the conference, who is hosting the conference.
There’s a blurred line there too.
>> TINA CHILDRESS: So Roberto.
>> ROBERTO CABRERA: Roberto here.
So to piggyback on what Christian is saying, often we forget what is the purpose of the videoconference happening.
So what is the goal?
Who is the audience, right?
Do we think about who’s in the audience?
What is the goal?
Is it to target the deafblind community or to invite the deafblind community, right, is that the goal?
If the goal is a larger audience, then that’s who needs to be accommodated, so we have to think about that.
Who is in the audience?
What is the intention of that particular workshop, that videoconferencing, what it’s being used for?
And then thinking about the host.
The host often forgets that they can provide some way for the audience to let them know what their needs are, right?
They can put an e-mail address or whatever.
They can list accommodations, you know, as an option.
And open themselves up so that the audience will tell them when they need something.
So if we can improve that communication, that connection between the host and the audience, then I think those things can improve.
>> TINA CHILDRESS: So during this pandemic year, I have been involved with several theatre groups.
Theatre is what I guess I would call my hobby, along with attending live events and music and things like that.
And what’s been awesome to see in the theatre and live events community is that, even though they are sometimes the most desperate for money, they go way out of their way to make sure that their performances are accessible and inclusive and diverse, and it’s been amazing to see that happen.
I have not seen that so much always in the business community.
Right?
So as consumers, I think that that’s what we need to do.
We need to voice that we need that specific accommodation.
So exactly what you guys have all been saying.
So thank you for those comments.
OK.
So the next question is what are the current accessibility limitations with teleconferencing that you see?
Also specifically for the deafblind community.
Sure.
I will ask the question again, Roberto.
So the question is: What are the current accessibility limitations with teleconferencing, including for the deafblind community?
>> ROBERTO CABRERA: Sure.
So there are so many.
But I will try and focus on some of the major ones.
So now the first question is, is there interpreting service ready, right, is that something that’s thought of immediately, or have they even considered that?
>> INTERPRETER: I’m just getting an interpreter clarification.
>> ROBERTO CABRERA: Does that deafblind person have access to an interpreter that can communicate with them in person, right, somebody who can be a communication facilitator to support them in the process of accessing this teleconferencing video.
That’s one option people could consider, but often such interpreters are not available.
Depending on where that person lives.
They may not live close to a city where there’s a large supply of interpreters that can come and do something like that.
But it is best practice.
And if providing that as an option were a policy they followed, that would be fantastic; that would be most accessible.
Now, when you’re setting up that conference and getting the word out about it, how much time have you allowed to prepare for it, right?
Is it happening next week?
Is it happening in a month?
Because it’s going to take time.
We need to share with the community what the topic is about.
Many times the audience might want access to the information beforehand.
And so if you try to put things on very quickly, then access is not going to happen.
So the time needs to be taken into consideration.
So that part is really frustrating for the deafblind community.
And as far as limits are concerned, I mean, what technology is available to them.
If you’ve got a state program that provides equipment, like I mentioned before, there are eligibility criteria, right? That deafblind person might not meet those criteria, so someone who needs that equipment might not qualify for the program.
And the criteria might look only at what a sighted person needs, at accommodations for a sighted person, not necessarily at a deafblind person.
And so those are all things that need to be taken into consideration.
They also might be using old technology; there might be equipment that has been accessible to them, but it isn’t what’s most needed at that moment, or it may not be compatible.
And so that will have an impact on whether or not they can participate.
There might not be enough technology available, right? There might be equipment for a certain number of people, but not everyone.
And another thing that we have to keep in mind is that the identity of being deafblind is a spectrum.
Not all deafblind people communicate in the same way and have that same need.
And so many of those decisions end up being made by sighted people who don’t understand how wide and varied that spectrum of identity is.
>> TINA CHILDRESS: Christian or Sam, do you have any comments on that or…
OK, Christian.
>> CHRISTIAN VOGLER: Yes, Christian here.
Thank you for all that information regarding deafblind accessibility issues.
It’s very valuable.
Often the needs of deafblind people, important as they are to focus on, get pushed aside; but I would like to add some challenges for deaf and hard-of-hearing people as well, people who read lips and people who sign.
From my experience, take group videoconferencing: when someone is signing and everyone knows sign language, everything goes smoothly; it goes well.
But if everyone in that conference is hearing except for one deaf signer, it’s going to be tough.
Because the hearing people won’t know sign language, and there just aren’t a lot of good options. Of course you’re going to pull in an interpreter, yes, but if the meeting is unofficial and called last-minute, you’re not going to have time to request an interpreter.
And in that situation, if you’re hard of hearing, can read lips, and feel comfortable speaking for yourself, you have a great advantage, because technology is able to support auto-captioning, to an extent, for a small group of people.
I’d say it would be good for that situation.
You know, but if you can’t, then you don’t have language there.
And how are we going to fix that situation?
So I just wanted to throw out that challenge to technology and the tech industry: how to better support language access.
>> TINA CHILDRESS: Sam.
>> SAM SEPAH: Sam here.
I have seen, not just at Google but also at other startup companies and big corporations, accessibility features and tools get built, not just for deaf and hard of hearing people, that can plug in as extensions; and 90% of them are never promoted or modified or improved for the community, and 90% of the time they just go away.
For example, a while ago, Google came out with a video product called Hangouts.
That’s the old version; now it’s called Meet.
So I would say that was about seven or eight years ago.
And the engineers built it with a special feature where deaf people could see the interpreter overlaid on their screen during a meeting, while the hearing participants wouldn’t see the interpreter at all. That was a while ago.
And probably one or two years ago, even after it had appeared in articles and newspapers and been advertised, people weren’t using it enough, so when the company moved on to the new platform, Meet, that feature was dropped.
It wasn’t considered beneficial.
And so it was kicked out.
There was no data to prove that people were using it enough to support keeping it, so my input for everyone is: if you want these features, then use them, please.
And if you don’t use them yourself, at least share and tell people about the cool feature so that maybe others will use it.
If you have complaints or concerns but don’t use the feature, it becomes a moot point, because engineers’ time is very expensive.
Engineers are among the most expensive people in the world.
But I would say we need to use these features.
It’s just like if you don’t eat your vegetables, you don’t get your ice cream.
It’s kind of the analogy I want to make here.
>> CHRISTIAN VOGLER: I want to jump in, and Sam, you’re right on the nose.
I remember Google Hangouts; it was really good. For college it was really beneficial, super cool.
I think for the NAD conference in 2012, Google set up a demonstration of how to use a VRS interpreter for the conference, and they showed the technology and how it worked.
But it was never released.
So I suspect that the FCC and its regulations had something to do with that. We also need to get onto the FCC about how they need to change, improve, and elevate their rules on how to experiment with VRS and VRI in today’s platforms; the FCC is stuck on the phone system, and that’s just not enough.
VRS needs to change to support videoconferencing at that level.
>> TINA CHILDRESS: So I’m going to go a little bit off-topic to a question that I did not give you guys ahead of time, so put on your thinking hats.
So, you know, throughout this pandemic, using things like ASR captioning as well as live captioning, many times deaf and hard-of-hearing consumers and deafblind consumers have complained about the delay.
Does anyone here want to talk about why a delay happens?
Because I think that sometimes consumers don’t understand why that happens.
And I would just like to clarify it for them, because I know some things are not in our control.
Some are not even in the platform’s control.
So is anyone able to answer that question, why is there a delay in captioning?
>> CHRISTIAN VOGLER: Christian here.
I guess everyone is looking at me.
>> TINA CHILDRESS: I’m looking at all of you but I’ll start with Roberto, he’s braver than you, Christian.
(chuckle).
Go ahead, Roberto.
>> ROBERTO CABRERA: OK, so I’ll keep my comments brief.
I’ve often discussed this topic about ASR.
Automatic speech recognition for those who may not know.
So with ASR, when you think about the different platforms, some are using it better than others and some are just pitiful, so you have to identify where it’s being used in a way that is actually effective.
And, again, who is the audience, what is the goal, and what are the audience’s needs?
If it’s something that has to be available last-minute, then it may have to be ASR.
If you’re going to plan it in advance, then, of course, you can look at it differently.
If you planned it two or three weeks, a month before, you can get a live captioner, a person who is trained in this and knows exactly what to do, and you can get the ASL interpreter.
And so if you have the blessing of time, then you can design how you want it to look.
So for me, as a deafblind person, and in the community, live transcripts and transcription are just beautiful, because I can follow along with the transcript as well as with the interpreter.
I can view that full transcript on the side; I can adjust the color, the background, the font, and the size; and the pace is way more manageable.
A lot of times when I’m looking at the captioning at the bottom of the screen, it’s moving so quickly that I miss what’s going on.
When I have the option of the transcript, I can take my time, read through it, and follow; the pace works much better for me.
The captioning on the bottom is stressful; it actually impacts my mental health when I’m trying to access things and catch up with it. If you have that full transcript, it really makes a huge difference for me, so I imagine it would be the same for other deafblind folks.
That’s just my little two cents.
>> TINA CHILDRESS: Go ahead, Christian.
>> CHRISTIAN VOGLER: OK, thank you.
Christian here.
So, more about the technology issues with captioning and what causes that delay. Suppose, here in the Zoom meeting right now with the four of us, Tina is speaking and her computer is sending audio up to Zoom’s cloud, and the captions are being produced from that audio.
There’s going to be some delay happening there.
And then it also depends on each person’s Internet connection.
And sometimes Zoom will delay the audio so that it matches up better with the video, which interacts with the captioning too.
Then the captioner hears that audio and has to create the captions.
And then sends them back to both Zoom and the streaming site, the page where the captions live.
I don’t know exactly how the integration works, but I’ve noticed that having both Zoom and that page open at the same time adds more of a delay.
I don’t know who’s responsible for that, Zoom or the captioner’s web page, but I think they need to get together, figure it out, and manage that delay so it has less of an impact.
Compare that with automated speech recognition, ASR, built into Zoom: there the audio is sent up to the cloud once, and Zoom is able to operate more efficiently doing that.
I’m sure the lag time is less there.
So that means if the captioning is integrated and embedded at the platform level, there will be less of a delay.
But if you involve a third party, then the captions are going to have more of a delay and a longer lag time.
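To make the chain Christian describes concrete, here is a minimal back-of-the-envelope sketch. Every stage name and timing below is an illustrative assumption, not a measurement from Zoom or any captioning vendor; the point is only that each extra hop adds to the total delay.

```python
# Rough model of end-to-end caption delay; all numbers are made-up
# illustrations, not measured values.

THIRD_PARTY_CAPTIONER = {
    "audio capture and upload to platform cloud": 0.3,   # seconds
    "audio relay to the captioner": 0.3,
    "human captioner listens and types": 2.5,
    "caption text sent back to platform and caption page": 0.4,
}

PLATFORM_INTEGRATED_ASR = {
    "audio capture and upload to platform cloud": 0.3,
    "ASR inference inside the platform": 0.5,
    "caption text pushed back to clients": 0.2,
}

for name, stages in (("third-party captioner", THIRD_PARTY_CAPTIONER),
                     ("platform-integrated ASR", PLATFORM_INTEGRATED_ASR)):
    print(f"{name}: ~{sum(stages.values()):.1f}s end-to-end")
```

Under these assumed numbers the integrated path is faster simply because it has fewer hops and no human in the loop; the third-party captioner buys accuracy at the cost of lag, which matches the trade-off discussed below.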
>> TINA CHILDRESS: Sam.
>> SAM SEPAH: Sam here.
So, Chris, you hit it on the head there.
The more people and the more steps that are involved, no matter how fancy the technology is, the more of a delay there’s going to be.
And even when you try to minimize it, there’s always going to be something else that gets in the way, whether it’s your personal laptop that’s slower or your phone that’s smaller and consumes more data.
There’s always some kind of intervention impeding that goal.
And so that’s why we at Google try to use Live Transcribe; it doesn’t rely on going up to the cloud and back.
We want the processing to stay localized.
On the device.
And that improves the delay.
But, again, remember, this is still a machine, which means it’s going to need time to process.
Regardless of how small the delay is, it’s never going to be zero.
You know, if the phone stored each and every sound and matched it to a word, that would be millions of files on your phone.
And you don’t want a big old phone-book-sized phone in your pocket; you want a thin, light phone. So you have to keep in mind that the processing has a tolerance level we can forgive: it will never be perfect, and unfortunately you do have to pay a small price in lag time.
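Sam’s point, that on-device recognition removes the network round trip but can never reach zero delay, can be sketched the same way. The function and all of its parameter values below are hypothetical, for illustration only; real recognizers buffer audio in chunks and also need inference time per chunk.

```python
# Lower bound on the delay between a word being spoken and its caption
# appearing; every value passed in here is an illustrative assumption.

def min_caption_delay_s(chunk_s: float, inference_s: float,
                        network_rtt_s: float = 0.0) -> float:
    """chunk_s: audio buffered before the model runs;
    inference_s: model processing time per chunk;
    network_rtt_s: round trip to a cloud service (0 for on-device)."""
    return chunk_s + inference_s + network_rtt_s

print(f"{min_caption_delay_s(0.1, 0.05):.2f} s")        # on-device: 0.15 s, never zero
print(f"{min_caption_delay_s(0.1, 0.05, 0.2):.2f} s")   # cloud ASR: 0.35 s
```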
>> TINA CHILDRESS: Yeah.
And this is Tina.
If you’ve been watching any of the other sessions, there’s been a lot of discussion about people choosing ASR over live captioning for the reasons that they do.
And it really is a trade-off you’re going to weigh.
You’re either going to have a live captioner who is more accurate but has more of a delay, or you’re going to have ASR that’s more synchronous but prone to more errors.
And I think there’s no perfect solution for that.
But thank you for explaining that.
It is; it’s all about integrating all of these technologies.
And when you introduce a human into the loop, they have to listen, process, and physically type the captions, so of course there’s going to be more of a delay versus something like Live Transcribe that Sam was talking about. So thank you very much for that.
I did want to direct you to the comments from Dana because I think that’s something that we haven’t talked about.
Maybe this might be our last topic.
And so it’s about acquired hearing loss and acquired cognitive and otolaryngological issues, and about making user options more user-friendly. Again, you can be really complicated with more options, or you can have fewer options but then you might have less flexibility.
So do any of you have any comments about that?
Christian?
>> CHRISTIAN VOGLER: Christian here.
I don’t want to take over the whole meeting, I don’t mean to, but anyway, I agree with Dana’s comment.
You mentioned, you know, technology and how complex it is.
It’s amazing.
I mean, it’s terrible but it’s also amazing.
So I think we need to take a closer look at the Trace Center at the University of Maryland; they have a project where they’re trying to figure out how to make equipment, computers, mobile devices, and tablets adapt to what each person needs, to give me exactly what my accessibility requires.
Instead of us having to change everything in the technology to match what we need, it matches itself to us.
And it’s called Morphic.
And it’s not ready for production.
But they’re making good progress and it’s very interesting, that field of research and that area.
But I fully agree with you and that philosophy.
Instead of me having to figure out how to find and edit every setting, the computer needs to be compatible with what I need.
>> TINA CHILDRESS: Sam.
>> SAM SEPAH: Yes, Sam here.
There’s a lot of argument about that. The trade-offs are huge: it’s private data, and it’s the user’s right. With computers adapting to what you need, you’re sacrificing your privacy, and some people are comfortable with that and some people aren’t. Let’s say I want the system to adapt to what I need and it sees everything in my house, like right now; it knows what color my skin is, it knows my socioeconomic status, all of those things. So the question becomes: along with all of your private data, how much do you want it to know about you?
It’s a balance between your privacy rights and how much you want it to adapt to what you need.
Oftentimes individuals are willing to sacrifice that, while society at large sometimes resists the idea, and that’s where I’m going to leave it.
>> TINA CHILDRESS: Roberto.
>> ROBERTO CABRERA: Roberto here. There’s so much we can improve on, right? There’s a lot.
One of the things I’ve always been looking forward to, hoping for, and pushing for is inclusivity: I keep doing everything I can to push that idea forward, pushing and challenging as much as possible.
Really thinking about inclusivity from beginning to end, through the whole process, and not putting it in as the last thing, as an add-on after everything else; looking at it from the very beginning. That’s my dream, that’s what I hope for the future, and that’s what I want to see, from beginning to end.
>> TINA CHILDRESS: And I think, you know, from all of these comments and the comments from the panel, it’s about options, right?
There is no one solution that can fit everybody’s needs.
We all have different preferences for accessibility and technology and what we feel comfortable with; one tool that satisfies everybody, we know it’s not out there.
It’s just like saying that everybody should wear the same clothes all the time and we know that that’s not possible either.
So we have about one minute left.
Do any of you have any closing comments or last thoughts that you absolutely have to share with the audience?
OK.
Yes, Roberto.
>> ROBERTO CABRERA: Technology is our enemy but can also be our best friend, right? It’s both of those things.
It’s a complex relationship.
And so, through communication, we can (?) inclusivity, and having that frame is the way we can work together.
I think that’s super important.
>> TINA CHILDRESS: Sam.
>> SAM SEPAH: Sam here.
First of all, all of the comments that you’ve put in the chat are amazing.
From all these comments I’ve seen you put in the chat box, I can tell you are passionate and want to be involved in the process, and that energy and passion need to keep going.
Keep it going with all the tech companies: not just Google but Amazon, Microsoft, and even small companies that are eager to work with you all.
I know it can be an exhausting process of getting onto them but remember, you’re doing this not just for yourself but for the betterment of everyone and the next generation too.
And then secondly, I’ve seen some people out there making comments saying, why don’t you let me know?
So if you learn something new that’s coming out or a new announcement, please share that.
Social media is really powerful, and some people might not have access to timely information, so share what you learn.
Sharing information is one of the most powerful and beautiful gifts you all have, now more than ever in human history; when you get that information, share it with our community.
Let everyone know: hey, this new captioning tool is coming out, or hey, there’s a new feature coming from this company, or even just teasers of things coming soon. Go ahead and share that; it’s the most powerful tool, and it’s beautiful.
That’s how we can keep things alive and pursue that equal accessibility, so please continue doing that.
>> TINA CHILDRESS: OK.
And Christian, do you want to say one last comment?
I don’t know if you’re raising your hand or not.
OK.
>> CHRISTIAN VOGLER: Yes, Christian here, I am, I was raising my hand.
I love technology.
You know, I can tell people are passionate, and I know it’s very important to remember that we deaf and hard-of-hearing people are very diverse and come from many different points of view.
>> TINA CHILDRESS: Absolutely.
Thank you so much, panelists, for your participation.
Thank you to the audience for joining us today and really stretching us and making us think about different things. I hope you enjoy the rest of the conference.
Thank you so much.
Bye, everybody. Thank you.
Tele-Education (Breakout)
Sheryl Burgstahler, Raja Kushalnagar, Chris Sano, and Mei Kennedy
Transcript
>> MEI KENNEDY: I’m going to wait a few moments before we go ahead and get started.
>> MEI KENNEDY: I’m so thrilled to have everybody here this afternoon. We will be discussing tele-education. And this is how you sign it, which is distance learning.
I’d like to go ahead and give a visual description before I continue. I am a female. I am bi-racial. I have brown skin. I have long dark hair. I am wearing a maroon shirt with a black background. And I have the TDI logo here at the corner of my screen.
So, today we have three panelists joining us. I will go ahead and share their names and give them the opportunity to introduce themselves. The topic, again, is tele-education.
But this does not just apply to a classroom environment. It includes the workplace, maintaining certification, and so forth. The three people joining us today are Sheryl, Raja, and Chris Sano. If you can go ahead and put yourselves on the screen and introduce yourselves; we will start with Sheryl.
>> SHERYL BURGSTAHLER: Hello. I’m Sheryl Burgstahler. I go by she and her. And I teach online, so I have years of experience in teaching students with a wide variety of disabilities, and my presentations are often about universal design or accessible design. I deliver a lot of conference presentations, and I do my best to make them accessible to people with disabilities and to people from the deaf and hard-of-hearing community. I direct the DO-IT Center at the University of Washington, where DO-IT stands for Disabilities, Opportunities, Internetworking, and Technology; it’s been around since 1992. I also direct the IT accessibility team, where we are responsible for making sure that the technology we provide on our campus to our faculty, students, staff, and visitors is fully accessible to anyone who might want to use it.
>> MEI KENNEDY: Great. Thank you. I’d like to go ahead and turn it over to Raja.
>> RAJA KUSHALNAGAR: Hi there, my name is Raja Kushalnagar. I often like to share with the audience that I don’t care how my name is pronounced; I just care about the spelling, because English is my first written language, but not my spoken one.
Also, I’d like to share that I am a male. I go by the pronouns he and him; in sign languages, there are no pronouns, just a pointing gesture. I am the director of the information technology program at Gallaudet University. I work on providing accessibility and resources for accessing both communication and information, and I work a lot with Dr. Vogler at Gallaudet. I also teach several courses, normally in person, but due to the pandemic, this last year it’s been virtual. There have been a lot of issues in the internet world that come up, such as with video, and I will explain a little bit more about that later in this presentation.
And I’d like to turn it over to the next person.
>> MEI KENNEDY: This is Mei. Great. And I will turn it over to Chris. Before Chris shows on the screen, I’d like for the two of you to stay.
>> CHRIS SANO: Hello. Hi, my name is Chris Sano. I’m a male. I have strawberry blond hair and a reddish-gray beard. I’m wearing a maroon-colored shirt and I have glasses on. I’m sitting in front of a gray background. So I am a software engineer at Microsoft. I am currently working as a tech lead for Microsoft Teams. And I was involved in bringing Teams to the Surface Hub. I did some work on the Teams iOS app, and I’m currently working on the latest version of Teams that is shipping with Windows 11. In previous roles at Microsoft, I have worked on Skype for Business on the Surface Hub, Office for Mac, and Visual Studio. I am extremely passionate about driving an accessibility-first mindset in product development, with the goal of ensuring that products like Teams provide an equitable experience for everyone. So I’m really excited to be here today and I look forward to the discussions.
>> MEI KENNEDY: Great. And we will go ahead and start with our first question. The first question I have for you all: wow, what a year it’s been. We have gone through so much, and a lot of challenges have taken place at different levels. I’d like to take the opportunity to ask if you can share an experience or situation in teleworking or tele-education, whether fully remote or hybrid, where you found things were not accessible. Could you share those experiences with us? And this is open to anybody, whoever would like to start.
>> CHRIS SANO: I’ll go ahead and start. This is Chris. So, I’ll start by saying that remote work experience has been far more accessible for me than in person. To me, it’s not necessarily about technology or the limitations of technology, but my ability to access and process multiple asynchronous information streams. So, as an engineer who works in an open space where collaboration is the norm and it’s very much encouraged, the positioning of interpreters throughout the many discussions that are happening throughout the day makes it extremely difficult to follow what’s going on. And this is especially problematic in situations where I must watch the interpreter while simultaneously being able to follow what someone is doing on their screen.
So, by the time I’m able to move my eyes off the interpreter and onto the screen, I am always a few steps behind the person who is talking. So, now, with remote working I use interpreters. No captions. I have two interpreters. One who is actively interpreting voice to sign. And the other who is ready to voice for me when needed. And they switch roles every 15 minutes.
So, we use two apps. Obviously, we are using Teams, but we also use Skype. We connect to each other, just the three of us, through Skype, and then we call into the meeting on Teams. There are several reasons for using Skype. When the Skype app is running in the background, its primary speaker window always stays on top. So this allows me to position the current interpreter wherever I need them to be. For example, if I’m in a coding session with someone, I can put the interpreter close to the code that we are looking at. This reduces all the eye and head movement that occurs when I’m switching between the screen and the interpreter, and allows me to follow things a lot more closely.
That primary speaker window also allows me to multitask during the meetings. Because everyone else is checking emails, right, reading documents, writing code, or even checking the weather during the meeting. So, why shouldn’t I be able to do the same? But Teams, unfortunately, doesn’t allow me to do this. When you switch focus away from Teams it’s in the background, so having Skype is really helpful in that regard.
One of the benefits of using Skype in addition to Teams is that it allows my interpreters to be invisible in the entire process. So, long story short, the project I was working on at the beginning of the pandemic was canceled and I was moved to a different team. Now, I have been working at Microsoft for a very long time, and I have been on several different teams, working with a lot of different people, but this transition was the smoothest I have ever experienced. When I meet people for the first time online, I explain that I am on a call with my interpreters; they acknowledge it, they might ask a few questions about how they can best accommodate, and then we move on. It’s very different from the in-person dynamic, especially with smaller, one-on-one meetings, where they have to get over the fact that instead of a one-on-one meeting, it’s one-on-three. People also tend to get distracted by the signing, so it’s nice to not have that be an issue.
Another benefit of having a second app is in situations where bandwidth becomes an issue: I can control the video. Teams, unfortunately, doesn’t allow me individual video control, which means that if I am using Teams, it’s not possible for me to turn off all the videos except the interpreter’s. It’s all or nothing. So I typically just turn off incoming video on Teams and keep video on in Skype, and that isn’t impacted by the bandwidth issue.
I have found myself in some situations where I’ve been completely knocked off Teams, but because my interpreters were still in the meeting, I was able to follow everything. There are obviously some disadvantages. The biggest one is when I speak during meetings: the speaker indicator on Teams shows that it’s the interpreter speaking instead of me. I find this to be very disempowering. I want people to know I am speaking, and I want to be represented in meetings as such. This is also problematic when it comes to captioning, especially transcripts, because instead of what I am saying being attributed to me, it’s once again attributed to the interpreter who is speaking. Add to that that I have two different interpreters who speak for me, and it can get very confusing very quickly.
Overall, despite the disadvantages, the remote working experience has been overwhelmingly positive for me. I realize that’s not the case for everyone, but personally for the most part I’ve been able to overcome the shortcomings. This obviously doesn’t mean there isn’t room for improvement.
>> RAJA KUSHALNAGAR: This is Raja here. As far as accessibility in a work environment and in tele-education, of course, each offers a different perspective. With smaller groups, it’s great. It’s more focused. The information is up close. There is an automatic connection between the interpreter and the language. You are able to see it all together, which is very nice.
However, as meetings become much larger and we start to use platforms such as Zoom, it ends up breaking down. Sometimes there are technical difficulties, and it’s hard to identify who is the speaker and who is the presenter, having to rely on chat, which can cause confusion.
Even with host controls, there is only one person assigned that role, so it just depends. Now, we are developing a standard process for providing control, such as spotlighting one person, using the raise-hand feature, and having more control over how many windows are on the screen. But challenges do come up with the captions and the transcript, and often a better solution is to separate the two onto two different devices. Sometimes the accommodations are not perfect in the system. Another challenge that arises is with video: there is not enough bandwidth in a webinar, and the video ends up blurry or choppy. So it’s really hard sometimes. That causes an issue.
We have to think about how to work around it. But, again, when it’s a small, intimate group, it’s much easier, and with tele-education, we are having to rely on communication and the internet.
The other challenge is the social aspect of it. When the video platform shuts off. Or if there are people speaking over one another and interrupting or trying not to interrupt.
Now, with Zoom, there is not that social aspect that we are used to. And that impacts education. Often education relies heavily on socialization and support outside of academic teaching or instruction.
So that’s the main thing that I could mention about this.
>> MEI KENNEDY: Sheryl, if you would like to share something in regards to challenges you have seen.
>> SHERYL BURGSTAHLER: Sure, I will speak from the perspective of a typical instructor. One of the challenges is that there are so many videos out there that are not captioned, and even those that are captioned are often captioned by a computer and not edited. So in both of my roles we spend time helping people learn how they can go in and caption, or edit their captions, on YouTube and on other platforms to make them more accurate. We also encourage people, particularly when they are creating a little introduction video for my classes, to make sure to caption that video, and we really impress upon other faculty that you should have all your videos captioned, but it is...
(Audio difficulty.)
>> SHERYL BURGSTAHLER: Because at least students then have the chance to get the accommodations they need and have things captioned in time to keep up with their class. But it’s not very welcoming when the very first thing you encounter, this little video, is not captioned. It’s kind of a strong statement that we didn’t invite you or expect you to come. That’s an important thing we keep pushing on.
Within our IT accessibility team, we actually try to put positive pressure on our departments around the University of Washington. It’s a pretty big place, so there are a lot of channels on YouTube, for instance, that have videos from various departments. We have an algorithm that checks whether the captions on those channels have been edited, or new captions provided, beyond just the computer-generated ones, and we share this information with the people in charge of those channels, in a supportive way, by the way. It’s like, hey, only 60% of ours are captioned now; we’ve got to get the rest of them done, because they want to reach 100% without our prompting. So sometimes peer support can help.
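Sheryl doesn’t describe the implementation, but a check like the one she mentions can be sketched against the YouTube Data API v3, whose captions.list method reports a trackKind of "ASR" for auto-generated tracks. The sketch below is one assumed way to do it, not the UW team’s actual algorithm; captions.list requires OAuth credentials, and the video ID shown is hypothetical.

```python
# Flag videos whose only caption track is auto-generated ("ASR").
# Assumes google-api-python-client and OAuth credentials for the channel.
from googleapiclient.discovery import build

def has_edited_captions(youtube, video_id: str) -> bool:
    """True if the video has at least one human-provided caption track."""
    response = youtube.captions().list(part="snippet", videoId=video_id).execute()
    kinds = [item["snippet"]["trackKind"] for item in response.get("items", [])]
    # "ASR" marks YouTube's automatic captions; other kinds were uploaded
    # or edited by a person.
    return any(kind.lower() != "asr" for kind in kinds)

# creds = ...OAuth 2.0 credentials authorized for the channel...
# youtube = build("youtube", "v3", credentials=creds)
# print(has_edited_captions(youtube, "VIDEO_ID"))  # hypothetical video ID
```

A per-channel report would then just be the percentage of videos for which has_edited_captions returns True, the kind of 60%-captioned figure Sheryl cites.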
Another thing that we do through our IT accessibility team is provide a small pool of money for free captioning for units on campus, for what we call high-impact videos: those that are likely to be seen by a lot of people, maybe in large classes where they are reused over and over, or on some of our high-level websites. We don’t rely on that pool, by the way, for captions requested by students as accommodations, because that reactive approach is already built into our accommodations office.
So this way, you know, we encourage people: hey, we will caption your videos, we will do it ourselves or we will pay for our consultant to do it. And they think, wow, kind of a free gift. But part of the deal is they have to work with us: they have to agree to learn how to caption and hopefully make a commitment to caption their videos in the future. So that’s been very successful as well.
I will say that computer-generated captions are certainly getting a lot better. They are certainly not done, and we will never reach a point where computer-generated captions are equivalent to human beings actually doing the captioning, but they are getting close, particularly in talking-head sorts of situations.
Another thing that can be a frustration in my role as IT accessibility team coordinator is faculty who don’t really think they need to caption things, and don’t quite get it for presentations and so forth. And this extends to conferences. I have been going to a lot of online conferences, because I can (chuckles), but also to spread the word about accessibility in a number of different ways. And so many of the conferencing programs being used are not fully accessible and are not designed to offer accessible content to everyone. In the case of captioning, some don’t support captioning at all. For one presentation I gave not too long ago, a few weeks ago, I had to use the captioning feature within PowerPoint, for example, in order to get captions. Now, I can do that, but I thought it was unfortunate that the conference wouldn’t take ownership of the captioning experience, because how many other presenters would do that? I don’t think very many; I didn’t see any others captioning their presentations. So that’s a frustration, all in the dimension of providing interpreters and captioners for videos.
By the way, in the course I am teaching right now, I use mainly asynchronous methods, and if I have something synchronous, I always make sure that it is captioned and posted for people to look back at later, which benefits everyone, particularly when the recordings have captions on them. So those are a couple of things.
>> MEI KENNEDY: Sheryl brought up one interesting point about education. Many people just don’t understand ASR and the quality of ASR; it really does impact a student, and we need to understand that. Going back to what Sheryl was also saying about universal design, I think that’s a critical term that really needs to be in the community: designing from the beginning with accessibility in mind, and making sure the experience is equivalent and acceptable.
So that is a question for you all. What do you think about that access, full access? What does that look like? How do we design that from the beginning? Does anyone have an answer there? Raja.
>> RAJA KUSHALNAGAR: This is Raja. Yeah, accessibility is for everyone, and not just for information or communication but for everything in between. Often people in the school system think accessibility to academic information is the only thing people need, and that everything outside of class, the social aspect of it, doesn’t need accessibility.
Zoom is cool because it can provide access to that socialization, maybe by providing an interpreter there.
Another thing, also, is automatic speech recognition. It’s not perfect. It works with simple vocabulary if you have great audio, a minimal accent, and no background noise. But oftentimes communication breaks down because you have poor audio or a speaker with an accent, and there’s nobody to fix that or to step in; there’s no human in that system to fix those errors.
And no interpreters are involved either. I think ASR is great; it’s a nice solution, but it’s a support rather than a replacement. Maybe from the educators’ perspective, they don’t understand the difference in what the quality looks like. Okay, who wants to address this?
>> SHERYL BURGSTAHLER: Could I say one more word about that, how we can promote this? Sometimes I see that faculty understand what I’m talking about when I talk about the value of captions for English language learners. I tell them, you know, it’s kind of a mean trick to take someone who is learning English and give them incorrectly spelled words and no punctuation. Isn’t that kind of a mean trick, when you think about it? They are trying to learn English (chuckles) and you are teaching them through that caption. It’s not that faculty don’t recognize the value to students who are deaf as well, but some of them have more students in their class who are English language learners, and they kind of get that. So I promote captioning as something not just for a small minority of the population.
And then also, a video on chemistry or something is bound to have some really long words in it. And, again, we want all of our students to be able to see how things are spelled.
>> MEI KENNEDY: Chris wanted to say something. Go ahead, Chris.
>> CHRIS SANO: I did. Thanks. It’s a tricky question, because we are multilayered individuals comprised of various backgrounds, experiences, identities, and so many other aspects. So our definition of what is accessible is going to be different for each person. It’s not one-size-fits-all, and that’s what makes this space so challenging, right? I don’t know if there is a clear answer for how we can make products fully accessible, but there are things we can do to work towards that goal. For me, an accessibility-first mindset is a big part of driving that change: when we are designing products, especially at the feature level, we need to be thinking about how we are going to build an equitable experience.
The disability spectrum is so broad, and it can be hard to think about all the different possibilities. If someone doesn’t recognize a potential problem because it doesn’t impact them personally, then they are not going to have a satisfactory solution, or any solution at all, for it. When you’re forced to continually adapt to an environment that isn’t designed for you, you naturally try to produce solutions that improve things, and in most cases those solutions end up benefiting a larger number of people than the targeted group. The very first closed captions, for example, were developed by a deaf person. They were created for the deaf and hard-of-hearing community, yet they are used today as a tool for those learning how to read, those learning to speak a nonnative language, or in situations where the user may be temporarily disabled, such as a high-noise environment, or one where sound needs to be kept to a minimum because they are trying not to wake a sleeping baby.
So I think it starts with an accessibility-first mindset, and with making sure that products are designed by and with people with disabilities.
>> MEI KENNEDY: Absolutely, Raja. Before I turn it over to you, I do have a comment in regards to what Chris was talking about, and what Carrie Lou was talking about. It’s not just one person, the deaf community, or one office; it’s actually systemwide change that needs to happen, as Raja said. And we can’t just put it on one person. It’s everyone. So, Raja, go ahead.
>> RAJA KUSHALNAGAR: To that point, a systemwide change. What’s interesting about ASR, given its limitations, is that with both hearing and deaf people watching captions together, everyone recognizes where communication gaps are happening; in a one-on-one situation, you realize when it’s not working out. And on captioning and accessibility, I just found that on Netflix, 40% of all consumers use captioning pretty frequently. That’s a large percentage. And statistics from other countries, and from the U.S. and the UK, show up to 20% of all TV watchers using captioning. People have many different reasons, and all of us could guess what those reasons might be, but accessibility becomes universal, which is nice.
One issue is that needs around caption accessibility vary a lot more than we realize. There’s a big curve: statistically, the largest number of people sit under the peak, and solutions for the outer parts of the tails serve the outliers. We need to think about them and design for them.
So, that’s what I have to say about captioning.
>> MEI KENNEDY: Mei here. Building on what Raja just said: think about accessibility and design it for different environments. Something might work well for a small group but not work well for a larger group. You have to think about accessibility for everyone, and not just in one environment; you have to think about those various environments, what the impact would be in each one, and what that looks like. So, you’re right, Raja. Yeah.
So, Chris, I’m curious. You talked about your experience using different platforms at the same time to make your work environment accessible. Sheryl and Raja, maybe you can add to this. But for the platforms you are using, how are they designed with accessibility in mind, in a professional or educational environment? What does that look like?
>> CHRIS SANO: Yeah, so for those of you who don’t know, education is one of several different verticals in Microsoft Teams, and at its core, it’s the same code base. But it’s been modified to adapt to the virtual classroom experience with an emphasis, of course, on classroom collaboration.
The feature that is most relevant to this discussion is the same across all the different verticals, and that’s the meeting experience. So, in terms of accessibility for the deaf and hard-of-hearing community, live captions are currently the only feature that’s available. We do have the hand-raising feature, which brings more attention to those who wish to speak up during meetings. And we are currently working on providing CART support that will allow meeting organizers and participants to request that their captioning service provider streams captions to Teams.
But at the end of the day, while it’s great that platforms like Teams are providing tools that help with accessibility, it is ultimately up to the meeting organizer to make sure that their content is accessible. They own the content. They should make it accessible.
>> MEI KENNEDY: Thank you for that, Chris.
Sheryl.
>> SHERYL BURGSTAHLER: Yeah. I’d like to add to that: helping these companies make their products more accessible is an ongoing effort through our IT accessibility team. It’s not a do-it-and-you’re-done effort. The two best examples I have lately are Canvas and Zoom. In both cases, my IT accessibility team evaluated a number of products, and they did not vote for those two products, by the way, for our campus to adopt. And once the University of Washington adopts some piece of technology, you can be pretty sure that all the other postsecondary institutions will follow, and so we put a lot of pressure on ourselves to do a good job.
So my team said, you know, these products are not very accessible, but the group decided they wanted to purchase or license those products anyway. Once that happens, my team continues to work quickly, and the most immediate thing we do is to get into the contract itself, before it’s signed, a statement the company agrees to: that they will work with our IT accessibility team to continue to evaluate and improve the accessibility of their software. Sometimes they will tweak that wording and such, but we get something in the contract that says they will connect with us. And they usually welcome this, eventually if not at first. In both cases with those two products, there are other campuses that have standardized on them, are using them, or are thinking about it.
And so we bring them together in our team. For instance, in the Canvas case, we have a Canvas course set up where over 100 IT support personnel at postsecondary schools of all sizes engage in discussing accessibility issues. And we like to call those issues bugs, since we know what a bug is: it’s a mistake, an error that needs to be fixed, rather than just a little special thing you add on later, which is the way some people think about accessibility.
So they regularly report those bugs, and they prioritize them. There are several people in that group who are from Instructure, which is the company that produces Canvas. They are very supportive, and they love to point to our group when they are selling their product to other schools. But the key there is that, you know, you’re only one version away from being inaccessible, no matter how accessible the product you selected is, because we know these companies are updating their software, it seems, almost weekly. And you can’t let go. You have to keep testing, and make sure you’re alerted if they break things, which they tend to do quite often; and often it’s the accessibility features that break, because they are not testing for those as much as they might be testing for other things.
I think it’s critical that we get these communities together. You can imagine Instructure paying a lot more attention to 100 people in that course framework than to just the University of Washington talking to them, where they might often say, well, nobody else is complaining about this.
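Sheryl’s point, that you’re only one version away from being inaccessible, is the usual argument for automating accessibility regression checks on every release. As one hedged illustration, not the UW team’s actual tooling, the open-source axe-core engine can be driven from Selenium through the axe-selenium-python package in a CI job; the URL below is hypothetical.

```python
# Run axe-core accessibility checks against a page; fail CI on violations.
from selenium import webdriver
from axe_selenium_python import Axe

driver = webdriver.Firefox()
try:
    driver.get("https://lms.example.edu/courses/demo")  # hypothetical page
    axe = Axe(driver)
    axe.inject()          # injects the axe-core JavaScript into the page
    results = axe.run()   # evaluates the page against axe's WCAG rules
    violations = results["violations"]
    for v in violations:
        print(f'{v["id"]}: {v["help"]} ({len(v["nodes"])} affected nodes)')
    assert not violations, f"{len(violations)} accessibility violations"
finally:
    driver.quit()
```

A check like this only catches markup-level regressions (labels, contrast, ARIA roles); caption quality and the workflow issues discussed in this panel still need human review.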
>> MEI KENNEDY: This is Mei. Sheryl, that’s a great example of a university and a corporation working side by side. And I think that’s going to be great for the future.
Now, with that type of research and experience, are you planning to release it to the public? Will it be published? Or do you know of someplace where people can share that type of information, share the value of the research, what has worked and what has not, and really advocate for change in the accessibility of those products?
>> SHERYL BURGSTAHLER: You know, sometimes we will do a little research study or whatever, but we never identify the web pages we are evaluating; we might report the value of training or whatever. One of the unfortunate things about accessibility testing is that the attorneys who represent our postsecondary schools get very nervous about sharing those results. And I actually do as an administrator as well, because, for instance, take Zoom or Canvas: if we reported today what accessibility problems they have right this minute, well, we have a great relationship with them, they are working on it, it’s on their task list, and it may be fixed next week.
So we would now be sharing information that’s not true anymore. It’s such a moving target that most postsecondary institutions will not agree to do that, so there’s a lot of retrofitting and reinventing the wheel. That’s why these communities, the community-of-practice idea where there can be a lively conversation without publishing it on a website somewhere, have a lot of value.
>> MEI KENNEDY: I understand that. And this is Mei here; I think it’s great to share that information in that way. That’s understood.
Raja, if you’d like to speak to that.
>> RAJA KUSHALNAGAR: Raja here, that’s right. Software accessibility changes over time. For example, sometime in the past, Zoom used to have a phone number you could call in to, so you could use VRI, video relay interpreting, and have the interpreter in the meeting. At some point that stopped, and there was no way to access it; the interpreter was unable to participate in the meeting through video relay.
There were several complaints, and about two or three months later they eventually added that option back. And the same goes for automatic captions: they required being on a paid plan and were not included in the free plan. And so that became an equity issue. Having to pay for accessibility, I mean, does that sound right? Is that a good approach? I don’t know if it’s really a good idea.
So, you know, most people would know that. But of course, a lot of platforms have their own challenges, so we’d like to experiment with other platforms as well, get creative with UI design, and work collectively. And I wish that we could include the deaf perspective and those really cool ideas, but because accessibility is not fully put in place, I don’t see it happening yet. So there are challenges we have to keep moving forward with, figuring out new options that provide accessibility at the same time.
>> MEI KENNEDY: This is Mei here. I do agree with that, with both Sheryl and Raja. It’s important to have the perspective of the deaf community, to have their input, and it’s important to be able to see what the future looks like. Will it be more of a hybrid approach, or what needs to change? What kind of changes do we need to see? So I’m curious, what are your thoughts? Sheryl.
>> SHERYL BURGSTAHLER: I wanted to pick up on one thing that Raja was saying, or I think it was Raja, or maybe it was Mei. Anyway: some conferencing software and other software will charge extra for accessibility features. It’s just like, huh, I can’t believe it. And someone asked me, is that illegal? I’m not an attorney, but it’s not my understanding that it’s illegal to create inaccessible products or to charge for whatever in your product. What it does violate is at least the spirit of the Americans with Disabilities Act, if we are using inaccessible products. So if we want to use a product where you have to pay extra for accessibility, and I really don’t like that idea, then we should at least pay for that accessibility.
>> MEI KENNEDY: Understood. Mei here. This actually relates to our next question; I might shift a little bit, so I’m going to hold that. But for the next question: who is responsible for providing accommodations, for providing accessibility? Is the school responsible? Is the tech company responsible? And what do the costs involved in those accommodations look like? Do you have any thoughts on that you would like to share? Raja?
>> RAJA KUSHALNAGAR: Raja here. From a legal standpoint, in general, it depends on which legal rules apply and how it’s managed. For better or for worse, there are a lot of different frameworks in place for education and work: IDEA for K through 12, the ADA in the workplace, and some states have their own regulations as well.
So it’s pretty complex. But in general, it’s pretty straightforward to follow the ADA and what it represents. And charging for accessibility should be illegal. That’s all I have to say.
>> MEI KENNEDY: Mei here. Sheryl, go ahead.
>> SHERYL BURGSTAHLER: In most cases, it's whoever is offering the service. So if you're offering an online course, you, a university or some private organization or whatever, you, that organization, are responsible. Sometimes in a postsecondary institution, people say, well, that department is responsible, not us. The university is responsible, period. That's where compliance has to be reached. You can delegate some responsibilities to different units on campus, but that doesn't relieve your responsibility.
Think back to when people were arguing about an onsite conference that's not fully accessible to wheelchair users: whose fault is that? People used to come to me, not so much now, and say, well, it's not our fault. It's the hotel, right? Well, no, it's your fault, because you are the one that contracted with that hotel. So you have to take ownership of what you purchased or what you have agreed to use.
>> MEI KENNEDY: This is Mei here. That's a very interesting point you have made, Sheryl. And previously you mentioned universities assessing the products that are being used. So this really is about accountability; we have to assess the situation and make sure it meets the student's needs, which is very interesting. Raja, I see you have your hand up.
>> RAJA KUSHALNAGAR: This is Raja. In regards to that, there's the concept of a central budget. Ideally it sits at a high level, a university-wide level, and you don't push it down to a department or research level. I have personally experienced that as well, and it's very challenging.
At a lower level, there are a lot of barriers, because the budget for providing things like interpreters isn't there. There are a lot of unfortunate cases that take place because the budget does not support it, and that impacts those who are involved.
And there are some things that are absent in the legal realm, both in theory and in how it's applied in practice. So that's a really hard part of this.
>> MEI KENNEDY: This is Mei here. So with that, I'd like to go ahead and share my next question. What does the future of education, of instruction, look like? Are we projecting that it might be more hybrid, or in person? What kind of challenges would take place? What do you foresee? I can see that everyone is sitting and thinking about that. Raja, go ahead.
>> RAJA KUSHALNAGAR: This is Raja. The hybrid approach, instructing both hearing and deaf students whether face to face or online, can sometimes be hard. Today, the software doesn't have the quality to provide that service for most classes, and there isn't a solution. At Gallaudet, we're forced to do either face-to-face instruction or virtual, but we haven't done both at once. We know that there is software that provides hybrid education.
>> MEI KENNEDY: This is Mei here. Sheryl, or Chris, do either of you have something to add to that?
>> SHERYL BURGSTAHLER: I think there are two ways people are using the hybrid terminology. One, which I think is what Raja is referring to, is offering a course fully online and fully onsite at the same time, so people can be online learners or onsite learners. I'm not sure if that's right, Raja. But the way I'm using it for what I'm about to say: I have taught fully online, I have taught fully onsite, and I have taught hybrid, where you have an onsite class at least part of the time and online components to it. And I much prefer the hybrid, because if you organize your course right, you can get the benefits of both online and onsite learning. Often people will talk about discussions and interactions as being a value of onsite learning. But when you think about it, if you have a group of 50 students and you're carrying on a conversation with the whole class, not everyone gets to talk; you don't have enough time. And if one person answers a question and then another person you call on adds to the answer, you're not really testing them on what they have learned independently; they are piggybacking on what someone else learned.
I love that classroom interaction. I think it's really important, small groups and so forth. But one thing I really like about online conversations, using a typical discussion board offered in any learning management system: you can have a discussion where you pose a question and ask each student to answer it, but they can't see their peers' answers until they post their own. Once they post, that triggers the board to show them all the other responses. Then you can, as an instructor, require that they look at their peers' responses to the question, and maybe require them to respond to two of the others.
When you think about that, everybody in the class has been required to participate actively and to pay attention to the responses of their peers. And the other advantage is if they are an English language learner or for some reason have difficulty composing their thoughts really quickly, they can take all the time they need in that online discussion.
So I have onsite discussions and online discussions every week. And I really love that combination.
>> MEI KENNEDY: This is Mei here. Great. The best of both worlds, like you said.
Does anybody else want to add anything? Raja?
>> RAJA KUSHALNAGAR: Raja here, yes, I do have something to add. That's right. These opportunities should center accessibility, you know, the best of both worlds: the social aspect of the face-to-face learning environment, and accessibility online, each offsetting the weaknesses of the other. Yeah.
One thing about language access, too: there's English accessibility and signing accessibility. Whether it's a video or you're typing something for a post, whatever software we are using, I think we need to build that practice in.
>> MEI KENNEDY: Mei here. What do you think about creating content and pushing that at the university level? What does that look like in different environments for you? That just becomes who you are as a student or as an employee. Being mindful of what that looks like at each level, and doing it from the beginning. Again, going back to universal design, designing accessibility in from the beginning, I think that's a great skill set to push, to start thinking about and to teach.
I have one more question before I wrap it up. In the audience, if you have more questions, please pose them. What about virtual games? That's a hot topic right now. How do you view virtual gaming? How does it impact the education system, and how does education use the virtual world? And virtual gaming itself, is it accessible? If it's not, do the challenges we have seen with tele-education apply in the same ways to virtual gaming, and how might changes in the virtual gaming world impact the educational realm? Raja.
>> RAJA KUSHALNAGAR: Raja here, yeah, gaming. That's nice. It does provide a motivation to accommodate today's generation and to make sure games are inclusive.
And games can help with that. But sometimes good design doesn't get applied. The emphasis in gaming is socialization, and I think supporting that is great. Also, speech recognition in the gaming realm is getting better. But where do you put captioning on the screen? Or do you add an interpreter? Where would you put them?
That's also the largest challenge we see in a classroom, when we are trying to use those tools and maybe using multiple monitors. We haven't found a solution to these issues yet. So —
>> MEI KENNEDY: Mei here. Sheryl, do you have anything to say about that question I posed? Sheryl? Nothing?
>> SHERYL BURGSTAHLER: Sure. I direct a project called AccessCyberlearning. Cyberlearning is the term used primarily by the National Science Foundation for learning opportunities that make use of digital technology. Our project, AccessCyberlearning, works with other projects that NSF has funded; those projects create learning opportunities for students, primarily K-12, and many involve gaming. The one thing I can say, and we had several meetings about this with these people in a community of practice, is that very few of them are considering accessibility of any type, certainly not universal design. I find that very frustrating, because it means a lot of the games in the future are not going to be designed with accessibility in mind. So I always tell these researchers: even if they have a small grant, maybe a $300,000 grant, which is not very big, and they're piloting something. Fine, but if you're going to pilot a game and you're not going to consider some of these issues we are talking about, like captioning or speech output or being able to operate it with a keyboard alone and so forth, at least identify that as a limitation when you report that research, and talk about how it should be considered in future research.
Most of them don't even identify that. At one meeting I went to about this, it was said, well, we can't expect all of these researchers to think about these accessibility issues. And I said, you know, in this day and age I have a feeling you wouldn't fund a project if the proposer said, well, boys like games more than girls, so we are just going to test this with boys and leave it at that. That would be silly. We wouldn't do that. Maybe we used to do things like that with textbooks back in the day.
Anyway, it's frustrating to see that. So we have to continue to push for universal design that includes people who are deaf and hard of hearing, but many other people as well, including English language learners, people with other disabilities, whoever. Everybody.
>> MEI KENNEDY: Mei here, yeah, that’s true. Yeah, we do need that push from the beginning, with the designers. It’s a great comment.
I have a question for Sheryl from the audience. Let's see. How do you communicate with hard-of-hearing students and accommodate them? What accommodations do you have? Assistive listening devices?
>> SHERYL BURGSTAHLER: Yes, I would say that would be the first —
>> MEI KENNEDY: How do you sensitize a professor to that? How do you ensure that accountability is there? That's a big challenge you see at different universities. I'm sorry if I'm throwing so much at you, but, yeah.
>> SHERYL BURGSTAHLER: That's okay.
>> MEI KENNEDY: That’s the question for you, Sheryl.
>> SHERYL BURGSTAHLER: I am pretty passionate about all this stuff. To be clear, I'm the director of Accessible Technology Services; that's not the unit that provides accommodations, but we work extremely closely with them. I do know they look at each individual student, and the most important thing they consider, besides documentation and so forth, which they have to do, is the preference of the student. So assistive listening devices could be the responsibility of the institution, but many times it's the responsibility of the student to provide them as a personal aid. That's something they work through. Again, I'm not the one that does that, so that's it.
>> MEI KENNEDY: Mei here. I just want to make sure we answer the question that was asked: how do you communicate with students to let them know they can have accommodations? How does that work? And how do you make sure the professor knows, oh, this student needs accommodations? That's the question. Is it on the students, or is it on the professors? Who is in charge there?
>> SHERYL BURGSTAHLER: In K-12 education, the counselors, teachers, and parents have significant roles to play. But not in higher education, which is where I work, because we are working with adults. So on most campuses it's the responsibility of the student to go to the disability services office, and we have to have such a place, or at least a person they can go to, show whatever documentation about their disability is required by the institution, and then request accommodations.
And then it's the university's responsibility to provide a reasonable accommodation. And if that can't be negotiated, there are processes for, you know, filing a complaint or whatever.
Once an accommodation is agreed to between the person with the disability and the person from that office, a letter goes to faculty members, with the student's permission, by the way, and it tells the faculty members what the appropriate accommodations are for that student.
We like to remind faculty, though, that most students with disabilities do not register with that office, for a lot of reasons. Personal reasons; it's a personal choice. But sometimes it's because they are worried about the stigma, or they are afraid that the faculty member might discriminate against them if they find out, for instance, that they have a learning disability, and maybe the faculty member has misconceptions about that. So that's unfortunate. This really promotes the idea of universal design, where faculty at least build some accessibility into their course, like captions, for example, proactively rather than reactively.
How do students know about this? We publish it on our websites, departmental ones but also the main website. It's probably listed in application forms and so forth; I don't know exactly where it might be, but we do make it known. And in theory, at least, K-12 students learn about this through their IEPs and their Section 504 plans before they graduate from high school, so they know they need to find out where that office or that person is at an institution they are thinking about attending.
>> MEI KENNEDY: That person said thank you so much.
Any more questions from the audience for our panelists before we wrap up? I think we are okay to wrap up. So thank you all so much for your time and for joining this panel. Thank you, audience, for being here. And thank you, TDI, for sponsoring this conference. Thank you so much. And I'll see you all tomorrow.
>> CHRIS SANO: Thank you so much for having me.
>> SHERYL BURGSTAHLER: Thank you.
TeleHealth (breakout)
Lisa Bothwell, Mei Kwong, Suzy Rosen Singleton, Mike McKee, and Matt Myrick
Transcript
>> MATT MYRICK: Please make sure to describe your personal appearance, what you look like, whether you have brown hair or glasses, what kind of shirt you're wearing. Mike, maybe you can say you have your doctor's uniform on. Mike is saying, yeah, I should have worn my white coat.
>> MATT MYRICK: We’ll go ahead and get started. I would like to thank everyone for joining this session, this session regarding TeleHealth. My name is Matt Myrick and I wanted to —
>> We need the interpreter. Matt needs an interpreter.
>> MATT MYRICK: Thank you all for joining this conference. This is a TeleHealth session. We have four individuals on the panel plus myself. I am Matt Myrick, a TDI member at large. Let me self-identify, as we have participants in the audience who are deaf-blind: I have brown hair and wear glasses. I'm wearing a blue polo shirt with a TDI logo on the left chest. Next, I would like to hand it off to Lisa.
>> LISA BOTHWELL: All right. Hello, everyone. I am Lisa Bothwell. I am a Caucasian woman in my 30s. I have short hair. I'm wearing a black shirt with a black jacket, business casual. I work as a manager at the Administration for Community Living, ACL. The goal of ACL is to support people with disabilities and older adults living in their homes and in their communities. My area of expertise is policy-related review: we review different policies and do development within those fields. So with that, I will turn it over to the next person.
>> MEI KWONG: Hello. I'm Mei Kwong with the Center for Connected Health Policy.
>> Sorry. We need the interpreter up.
>> MEI KWONG: I'm Chinese. I have long dark hair. I have on a blue dress with leaves on it and a pair of hoop earrings, and I wear glasses. I think I'm the only woman on the panel wearing glasses. Our organization looks at TeleHealth policy on the federal and state levels.
>> MATT MYRICK: Suzy?
>> SUZY ROSEN SINGLETON: Hi, everyone. My name is Suzy Rosen Singleton. How to describe me: I'm wearing a black jacket with a necklace. I've got my hair up; it's blond. I'm in front of a plain blue background. I work at the Federal Communications Commission in the Consumer and Governmental Affairs Bureau, where I am the chief of the Disability Rights Office, focusing on video programming, modern communications, and emergency communications access; we collaborate with other bureaus in our agency on those three areas. I am here today also to share some of my personal experiences, a wonderful success story about TeleHealth access, and I am hoping to share that with you. I coordinate very closely with Lisa, and we worked together in the federal interagency accessible TeleHealth working group to make forward progress and ensure that DOJ, HHS, and all those agencies partner together to protect your rights to accessible TeleHealth. Now, I will pass it on to Mike.
>> MIKE McKEE: Thank you so much. My name is Mike McKee. I will identify myself: I'm Hispanic and Caucasian, and I have brown auburn hair. Currently, I'm in my home office. I have a blue and white striped shirt on. I'll explain a little bit about my role. I'm a deaf family medicine physician. I work at the University of Michigan in their Department of Family Medicine. I work as a physician there, and I'll talk about my experiences interacting with patients in person and now moving into a more virtual sphere.
I also do research involving some of our clients and patients. So I'm looking forward to having that discussion with this panel.
>> MATT MYRICK: Okay. All right. Thank you. Let's wait for the interpreters to come back on. Thank you. Now let's go ahead and dive right into the panel discussion. I know this has been a hot topic, you know, with the pandemic that hit us last year, and there are lots of questions regarding TeleHealth issues, et cetera. So I would like to start with the very first question for Mei. Can you provide a brief description of TeleHealth, what exactly it is and how it is being used?
>> MEI KWONG: TeleHealth really just means using technology to provide health care services when the patient and the provider who is providing the service are not in the same location; they use technology to bridge that distance. The types of services it's used for vary from specialty to specialty. Some specialties can use TeleHealth for a lot of their services, and other specialties have a narrower range of services where the technology works. A lot of times it really is left to the provider's judgment, in consultation with the patient, on when to use it, because you can have two patients being treated for the same thing where technology isn't as good a way to provide the service for one patient as it is for the other. So even though I'm a TeleHealth proponent and advocate, even I say it is not appropriate for every single situation, but it should be available for anyone who may want to have those services provided via technology.
>> MATT MYRICK: Thank you. Thank you, Mei. This next question is for Lisa. Can you expand on the accessibility requirements for TeleHealth providers?
>> LISA BOTHWELL: Sure. This is Lisa speaking. I'll focus on three laws; there may be other applicable ones, but I will focus on three things: the ADA, the Americans with Disabilities Act; Section 504 of the Rehabilitation Act; and Section 1557 of the Patient Protection and Affordable Care Act, the ACA. Many of you are already familiar with the Americans with Disabilities Act. Two of its titles apply here: Title II covers state and local governments, which are public entities, and Title III applies to places of public accommodation, professional offices, health care provider offices, hospitals, social service establishments, insurance offices, pharmacies, and so forth. Section 504 applies to entities receiving federal funding assistance and also to executive federal agencies. Section 1557 applies to entities receiving federal funding assistance and entities covered by ACA Title I, which would typically be through the state-based insurance marketplaces. So those three are the legal aspects we'll be talking about. Using language the DOJ used last year, in the context of health care, non-discrimination based on disability means equal access to available health care services, regardless of whether those services are provided in person or through a virtual platform. An example of that would be TeleHealth or telemedicine; we abbreviate TeleHealth as TH and telemedicine as TM. That requires accessible information and communication technologies. Effective communication means communication must be as effective for people with disabilities as it is for people without disabilities.
An interesting thing is that effective communication includes accessible information, accessible technologies, and the definition of auxiliary aids and services. Let me back up just a second. Health care providers are responsible for providing auxiliary aids and services, which we call AAS. Typically you might be familiar with interpreters, captioning services, and a variety of alternate formats and so forth. I want to provide two resources that have been released that might be pertinent to this audience, and then I'll wrap up. The ACA is under HHS, and under HHS is the Office for Civil Rights, the OCR. The OCR has released two bulletins: one regarding civil rights requirements during the COVID pandemic, and one about accessible information and technology. I'll include those in the chat if anyone is interested in reading more about those two documents. And I want to take this moment to provide some information about how to file a complaint. There are two places you can file a complaint: one is with the HHS Office for Civil Rights, the OCR, and the other is with the Department of Justice, the DOJ, through the DOJ's disability rights section. I'll also include those two links for more information in the chatbox. With that being said, I'll turn it back to Matt.
>> MATT MYRICK: Okay. Awesome. Thank you, Lisa. The next question is also for Lisa. Um, interpreter? Yeah. So again, Lisa, can you clarify what effective communication is and provide some examples?
>> LISA BOTHWELL: As I said before, effective communication means that an entity, if it's covered under one of those three laws or another civil rights law, has to ensure that communication with people with disabilities is as effective as it is with people without disabilities. Effective communication can mean that the care provider is responsible for providing those auxiliary aids and services, which include qualified interpreters. It can include captioning, C.A.R.T., which is real-time captioning. It can include alternate formats, for example, Braille and other formats. And I think that's —
>> Interpreter: May I add to that?
>> MATT MYRICK: Yeah. Suzy?
>> SUZY ROSEN SINGLETON: Hi. This is Suzy speaking. Once upon a time, I was a litigator in California, and there was a case I litigated against a hospital that refused to provide interpreters for the spouse of a patient who was on life support and comatose. They needed to disconnect life support, but the hospital said that because the spouse was not the patient, they would not provide accommodations. I was at a law center in California at the time. The ADA had just passed; this was in '91, but there were no regulations yet promulgated on it. So we ended up going to the 9th Circuit court to argue about what effective communication meant in the absence of regulations, and effective communication was then interpreted to mean requiring interpreters for complex communication. So it really does depend on the communication itself: whether it's simple or complex, whether it's significant or minor. There are a number of different factors that go into that determination, and writing back and forth could be considered effective communication in some contexts, but it depends on the details of the case and the situation itself. It is a very complicated concept, very individualized and fact-based, depending on the environment as well. Basically, from the 1990s to today, the interpretation has been that effective communication requires that the individual with the disability feels that it is effective. For example, if you're on a TeleHealth appointment and you would prefer an interpreter, then you should have that type of accommodation. Effective communication is very idiosyncratic and depends on the individual's needs. There is a whole suite of options, and you have to ask the patient and the patient's family. I hope that helps clarify effective communication; the lawyers are working in a lot of gray areas here, trying to nail it down.
>> MATT MYRICK: Yeah. Thank you, Suzy. It's not a one-size-fits-all category; it depends on communication modalities. That's very important.
Okay. So moving on. This question is for Dr. Mike. Why are TeleHealth platforms not accessible enough, and what features would make a TeleHealth platform truly accessible for all? Can you share some insight on that, Dr. Mike?
>> MIKE McKEE: Sure. Thank you so much for the question. There is a variety of platforms out there, Zoom and a number of others, and some of them are reportedly accessible. For hospitals, doctors, and institutions or systems, when they contract, maybe they are looking for the cheapest option, not the most advanced. So with the platforms available, there are some limitations. With Zoom, you can have live captioning as a feature, meaning automated live captions, or you can have a 3-way meeting. A video visit would be a simple call to a doctor on video: I can see the doctor and the doctor can see me. Or you can add a third party, a video interpreter, for a 3-way meeting, and that has to be contracted in advance to be available. It's not a question of whether the technology is or isn't available. It's whether the institution, the staff, the doctors, the health care system know about it, and whether those technologies can deliver effective communication. What happens is sometimes the technology is widespread in the world at large, but it is not available in certain sectors. We want to advocate and fight for having that accessibility available for everyone, consistent across all populations; we would like one consistent experience for everyone. There are complexities involved in that. For C.A.R.T., sometimes Zoom's live automated transcriptions have errors, so you can ask for C.A.R.T. instead. It involves a link, and some staff or physicians aren't really knowledgeable about going to a link to connect to the C.A.R.T. services. So it's important we educate them and make that transition easier and also more secure. I would recommend we think about the questions to ask, and the first question is not really the cost. First, we should think about availability: is it accessible for deaf and hard of hearing people and all populations, and is there equity, equal access? For example, maybe the ease of use for hearing people is fine, but deaf populations don't have the same experience. We're really fighting for that.
>> MATT MYRICK: Okay. Thank you. I'll wait for the interpreter. Thank you, Dr. Mike. Suzy, I know you have some experience in this area. Can you describe what inaccessible TeleHealth looks like to you?
>> SUZY ROSEN SINGLETON: Sure. Thanks, Matt. This is Suzy speaking. I'll pick up on what Dr. Mike was just saying as well. You're right, there are many different possible ways to communicate with your health care provider, but we're focusing now only on two-way video platforms, not audio-only platforms. In March of 2020, I got back home from Sun Valley, Idaho, which is one of those hotspots for skiers. They had an outbreak of COVID, and when I got home, I was experiencing some symptoms, a high fever and sore throat. I was very, very concerned and didn't want to go to a hospital and expose other people. I think at that time in March, many of us were in a state of panic because of the unknowns of the pandemic; we didn't really know the risks at that time. So I wanted to stay home to the extent possible. I have an app for my TeleHealth portal, which I attempted to use for an unscheduled appointment, not a regularly scheduled one. It was at 8 o'clock at night and my fever was climbing, and I wanted to see what I could do to treat myself. I went into my local app, but there was nothing there about how to request an interpreter. I decided to go into the waiting room, where many doctors were attending, and I was still on the lookout for where I could request an interpreter. Eventually my call was taken by the first doctor, and I told him I was deaf and was trying to get an interpreter, but there was no way, in particular for an unscheduled appointment, to pull in an interpreter. So I struggled to communicate. I think they told me to take 8 Advils every 2 or 3 hours. I did that, regardless of whether that was exactly right, but I was still very concerned about my understanding of the doctor's instructions. The next day, of course, I was still ill and at home. My fever went down a bit, but I ended up going to the hospital after all. I tested negative; it was just the flu, which was both good and bad, you know, since of course I could still catch COVID. Regardless, I reached out to my local TeleHealth director, the person who runs the system that uses the app, and we then worked together to develop a third-party plugin for their platform. It's a really large health care network in D.C., so they luckily had the means for doing that. Basically, they developed a platform with a front end that has instructions on it: it asks if you need an interpreter and instructs you where to click, and if there are any other accommodations you need, you click elsewhere. So if you do need an interpreter, you click and are directed to a new video window, and the interpreter arrives within 1 to 2 minutes, very rapidly; they established a contract with interpreters so that the whole platform displays the doctor, the interpreter, and myself, as well as a chatbox along the side. So we were able to have a discussion there and really communicate in unscheduled appointments. I was a tester for that. I didn't have another emergency situation; I was helping them test it. It was very nice that they were able to establish that. I asked them what they did, and they said they developed an organic solution using Bluestream and contracting with Amwell. Looking into it further, there are four large TeleHealth platform providers: Amwell, Teladoc, Doctor On Demand, and MDLIVE.
So it is important to consider how to reach out to them, because they need to have platforms that are made with different options, right? Options for captioning, or an interpreter, or even another video window for a caregiver, say, for a person with a cognitive disability who needs a caregiver. Adding this kind of accessibility on the side is something the providers themselves may not be able to handle, because they are handed a product as received, and that product needs to be made accessible. There are things that need to go into that, and they need to coordinate. Another important aspect of the process is not just technical; it is the training of personnel. My local provider explained that they now require every single TeleHealth provider to go through a 30- to 45-minute initial training session and then another 20 minutes of testing, and they do tests and training on a monthly and quarterly basis. That way, all of their TeleHealth providers are versed in working with accommodations and pulling in interpreters, for example, on these unscheduled appointments. So that was my experience; I wanted to share that success story with you. But I wanted to mention a few other concerns and considerations. I don't know whether that particular platform is hearing aid compatible, for example; I had asked and made sure they should be aware of that. That's where the vendors need to have their own checklists, so that they have in mind all of the different accommodations they could need to provide. So, you know, certainly there is lots of work left to do, and I'm looking forward to everyone continuing to work together to make sure that happens.
>> MATT MYRICK: Okay. Wow. Thank you, Suzy. Again, I'm just (inaudible) this is a follow-up question I wanted to ask you. You have experienced the transition from inaccessible to accessible TeleHealth. Can you describe the service you currently use, what it looks like, and what you find most helpful?
>> SUZY ROSEN SINGLETON: Yeah. I mean, that portal, I believe, meets my needs right now. I have seen a lot of different people using different portals and systems, and really we shouldn't be burdened, as individuals, with the obligation to tell people what we need. There should be instructions available on the front page when you're going into the portal or the app that give you the opportunity to request accommodations upfront. You shouldn't have to go searching for it, and not only for accommodations but for language as well; I have seen some that have Spanish options. So the perfect portal, I think, would have a very well thought out, very well tested front end and well-trained personnel, and that's something we still haven't quite seen. I've been asking around for other people's experiences, and they have varied widely. That's something that I think should be accomplished.
>> MATT MYRICK: Okay. Thank you, Suzy. The next question is for Dr. McKee. What have you heard from your hearing, non-signing colleagues regarding their experience trying to serve their deaf, hard-of-hearing, and deaf-blind patients using TeleHealth? And what would help improve their experience?
>> MIKE McKEE: Thank you for that question. This is Dr. Mike. One of the challenges my hearing colleagues mention is that everything right now is new for everyone. There are a lot of changes happening, and a lot of people are learning on the fly, trying to accommodate people's needs through virtual health platforms. That's not to say we're not prioritizing accessibility, but education and training need to happen. Those things will happen as soon as possible, but for now we have to open up different possibilities to connect with patients. Sometimes patients will be tech-savvy and can use the portal, but I emphasize that a practice may have an accessible platform and the patients may not be aware of what to do or how to use it. So even if the technology part is accessible, it's still a struggle for them. For my hearing colleagues, one thing I noticed, and I discourage this: let's say a physician tries to connect with a deaf patient, realizes they're a signer, and then wants to connect through VRS. Unfortunately, VRS interpreters may not be medically certified; they're not necessarily skilled, qualified medical interpreters. We need to reach out to qualified medical interpreters who have education in that field, and there's more work to be done in that aspect. Some people say it's easy because a VRS interpreter is ready and can be contacted immediately, but I really try to explain that they're not medically certified. Also, you know, video is rich in data. Suppose we reach out to a patient: we can see their background and their home environment, we can see their presentation. Are they struggling? Are they having shortness of breath? Through VRS, we can't glean that information, because we're not seeing the patient. That's not equally accessible, and it's a risky approach too. There are risks: medical terminology goes through the interpreter, and their interpretation may be incorrect. So the relay is easy to reach out to initially, but there can really be communication breakdowns when trying to use it. Speaking of C.A.R.T., we have it available here. Let's say there's a separate Zoom link for C.A.R.T., and I ask you to go to that Zoom link; sometimes that isn't efficient or smooth. That's something that needs to be worked on. Live captioning is newer, because live captioning was (inaudible) with errors in the past. The hard-of-hearing community really wanted live captioning included; it was something they advocated for, but there are limited options when trying to set up C.A.R.T. and live captioning. So there are pros and cons to each approach. It's a different world out there, and we're not blaming anyone, but we want people to be creative and find different ways and strategies to make sure everyone feels comfortable with the technology. From here on out, this is going to be a new world. So we want clear training and a standard for everyone, like the 45-minute training Suzy mentioned; right now there is no training for doctors that I'm familiar with. Some people may be familiar with the technologies and more tech-savvy, as opposed to more seasoned physicians, and unfortunately that can impact the patient experience.
And sometimes deaf and hard of hearing patients know that COVID and the pandemic are stressful, and they tend to accept what is given to them. I would encourage them to speak up and make sure they have equal access, on the same footing with the hearing community.
>> MATT MYRICK: Awesome. This next question is for Mei. Mei, who should be responsible for ensuring that TeleHealth services are accessible? Is it doctors, or the technology companies that develop the platforms, or some combination of both?
>> MEI KWONG: I would say it's both. Part of it is a legal answer, and part of it is that they simply should do it, for a variety of reasons. The legal answer is that doctors and providers need to accommodate their patients with whatever accommodation they may need, whether it is for a disability or a language barrier. That is required by law, and for those who have not seen it, Lisa has been putting some great resources in the chat that reference some of those guidances and laws. So they're required to make that accommodation, and as was also pointed out, it must be an effective accommodation. It's not simply, I made something available; it needs to work for that particular patient. So there's the legal responsibility on the provider. Just because you're using TeleHealth doesn't mean all your legal responsibilities suddenly go away. You still have your responsibilities to patients with a disability or a language barrier, and you still have to abide by HIPAA and privacy rules; all of those still apply. You just may have to take a different approach in order to meet your responsibility.
Now, the TeleHealth industry does not have that legal responsibility, but there is a variety of reasons why they should make these accommodations, these options, available for providers to use, simply because the providers have to have something; they need to use it. It makes really great business sense if you're the only company that has all these options. If I'm a provider and I have patients who need some type of accommodation, and there's one vendor out there that has all the accommodations that meet my needs and nobody else does, I'm probably going to go to that vendor. So it makes really good business sense for the technology companies to develop those options and make them available. It's also the right thing to do. I mean, people with a disability are the same as everybody else; they will have health care needs. So why wouldn't you make those options available or develop them? I think part of the problem before the pandemic was that TeleHealth was such a niche area, so small and not widely utilized, that there probably wasn't pressure to develop accommodations. For example, and I'm not saying all providers did this, but you may have had a provider who, if they had a deaf patient and a TeleHealth option, probably said to the deaf patient, why don't you come and see me in person, so they could provide the accommodations in person as opposed to doing it via TeleHealth. They never developed those protocols for their TeleHealth platform. But then COVID-19 hit. Everybody needed health care services at that point, and everybody turned toward TeleHealth. So you had that gap during the pandemic where the technology wasn't developed, and maybe some of the provider training that Dr. Mike and Suzy touched upon wasn't there, so providers didn't know how to accommodate and work with those particular patients via TeleHealth even though, during the pandemic, they needed to. I think the responsibility is on both: legally, providers aren't going to get out of it, that's on them, but the industry also needs to make sure those options are available for the providers to use and for the patients to access.
>> MATT MYRICK: Wonderful, thank you, Mei. Before we go to questions and answers, I have a couple more questions; I want to make sure we have enough time for Q&A. This question is for Lisa. Lisa, with respect to TeleHealth, describe your role at the Administration for Community Living, and from your perspective, what do you think are the biggest barriers to making TeleHealth fully accessible?
>> LISA BOTHWELL: Sure. Hi again. This is Lisa. First of all, as I explained previously, I work for the Administration for Community Living, the ACL, which is under HHS. We want people with disabilities and older adults to be able to live within their communities, and one of our biggest activities is grants. ACL grantees provide different services in the community through different organizations and non-profits, many of which you're probably familiar with, including centers for independent living and assistive technology programs; those programs typically have loaner technology, so you can get an assistive device. I remember quite a long time ago, I received a TTY from an assistive technology program; I'm sure things are different now. Also, protection and advocacy centers, a legal type of agency, are available in each state and territory. We also fund state agencies on aging that work with older adults to get related services. There was one more I wanted to mention; really, there are quite a few. There is a whole list of organizations out there in the community that work with people with disabilities and older adults. I want to add that we also fund RERCs; I believe Bobby mentioned the Gallaudet RERC, which is a Rehabilitation Engineering Research Center. There are quite a few different grants out there.
And I want to specifically talk about the assistive technology programs. Many of the questions we receive about assistive technology relate to TeleHealth, and I noticed that in this conference. Assistive technology programs are doing some really wonderful things. They're using some funding, specifically CARES Act funding, for TeleHealth, to provide hotspots or other types of TeleHealth equipment for the community. So I really want to encourage everyone to reach out to your state or territory assistive technology program, and I can include a link where you can find how to contact those programs. You can be involved in demonstrations of equipment, and they have loan programs where you can borrow assistive technology devices, depending on their policies. So I really want you to interact with the AT device centers and just be aware of everything that's out there to support TeleHealth. You can also contact your center for independent living, which serves people with disabilities, or your Area Agency on Aging, and get a referral to the right type of organization. I strongly recommend you reach out to one of those if you have any questions about TeleHealth.
I want to circle back to an answer one of our panelists gave about working with states. I want to encourage people to work with the different state organizations or agencies that are receiving some of our grant funding; they're very engaged with the community, and really (inaudible) the community has the services they need to live independently, at the state level or for assisted living. So at the state level, reach out to them and better understand how they make policy decisions. As for my role at ACL, I work in a very small office, the Office of Policy Analysis and Development, the OPAD. We review many policies, from payment rules and regulations to civil rights documentation, notices of rulemaking, and website design. We look at the community and see what they need, what that community's demographics are, what the research out there is showing, including the research ACL is funding, and what the community is saying, and we apply that feedback to modify our approach and the policies we're developing. Not all of our changes are accepted, just to be clear. So that's generally my role at ACL. And as Suzy mentioned previously, she led an interagency federal partnership on accessible TeleHealth. That group included many different federal agencies: the FCC; HRSA, which is the Health Resources and Services Administration; the HHS Office for Civil Rights; and the Department of Justice. We all came together and talked about what we can do to move forward with making TeleHealth accessible, and what the next steps are. We also wanted to think about our goals and our vision for educating TeleHealth providers themselves. Suzy commented that she was working with HRSA, and we have engaged with HRSA too, trying to do a lot more outreach to the providers themselves and remind them of their civil rights obligations.
One thing we did, this was last year, was with the Department of Justice and Dr. Mike as well: he was part of a group presenting at a TeleHealth conference. It was wonderful, a coordinated effort from quite a few federal agencies and TeleHealth resource centers. It was a great project, and we try to find different ways to continue doing those sorts of things. I want to add one last part before I wrap up. Last week was very exciting: ACL, together with the Office for Civil Rights, presented to the interagency policy committee of the Executive Office of the President at the White House. We presented what the community is saying about accessible TeleHealth; we have seen that the community needs it. And that brings me to my last point. The biggest point we discussed at the White House presentation was the need for standardized TeleHealth accessibility, and that came from feedback from the community. With that, I will close and turn the floor over to someone else.
>> MATT MYRICK: Okay. Thank you, Lisa. Suzy, did you have something to add, and then we'll turn it over to Mike?
>> SUZY ROSEN SINGLETON: I wanted to add to what Lisa said. This is Suzy speaking. Lisa explained how the states have been working on accessible TeleHealth services and platforms, and consumers can advocate for that as well. I want to drive home that the Deaf and Hard of Hearing Consumer Advocacy Network, DHHCAN, has a white paper on accessible TeleHealth. It is really important that consumers are aware of that. That's a possible tool for your use in your state, whether you're providing it as a patient, as an individual, or as an organization, commissions for the deaf and hard of hearing, any number of organizations, when you go speak with your local providers about what you need. There's already a tool written and out there, so you don't have to reinvent the wheel. That's one quick thing I wanted to add in terms of resources. I don't have a link; I will look for it while others are talking.
>> MATT MYRICK: Okay. Thank you, Suzy. Mike, did you want to add anything else? Go ahead, Mike.
>> MIKE McKEE: Yes. Yeah. This is Mike here. Another important thing for all of you to recognize and know is that right now there are different TeleHealth options open and available. Some people mentioned audio: there are phone visits as well as video visits, and sometimes they can be both audio and video. Right now the reimbursements, meaning when something is submitted through insurance and the doctor gets paid for the services, are open: phone and video are reimbursed the same. We suspect that later on that will change, meaning phone consultations will be seen as not equivalent and will be reimbursed less. It is important for all of you to fight this. What I suspect is that a deaf person may have a struggle, may use VRS, and that gets counted as a phone visit, and there's a reimbursement risk there. We need to fight to make sure it is counted the same. We need to continue to have different options available, too. And sometimes with video there isn't any accessibility; we understand that. Sometimes it requires being in person, which is fine, but we need to make sure there's equivalent accessibility. If someone is sick and has to come in, we want to make sure we are able to support them having an in-person visit too. And hopefully CMS will continue to reimburse at a rate equivalent to in-person visits, so that people have different options. Again, the point is making things accessible and making sure we consider all of those things. Mei, did you raise your hand?
>> MEI KWONG: Yes. Thank you. Since we're talking about what actions you might be able to take on the state level, I want to tag on to what Dr. Mike was saying. It is very important to act now, because decisions are being made now. There is an increased interest in making sure there aren't disparities between communities, but I think a lot of policymakers are thinking more of people of different ethnicities or incomes, and they may not be thinking of people with disabilities as a community. Not all policymakers, but I do wonder, based on conversations I've heard. It goes beyond just race and income; there are other groups, by age, disability, et cetera, that may need accommodations as well. Mike brought up a very important point about audio-only visits. That is one thing we're not quite certain is going to stick around, and if it is something that is needed by the deaf and hard of hearing community or another community with a disability, they need to make their voices heard. How do you go about doing that? The first step is to talk to your state representatives, but there are also a lot of state groups coming together to push TeleHealth policy. I'm based in California; there's a California TeleHealth policy coalition that CCHP convenes. There are similar things going on in other states, and if you're not quite sure where to look, I suggest you reach out to the TeleHealth resource centers. There are 14 of them: CCHP is the national resource center on policy, there's a national resource center on technology, and there are 12 regional resource centers that cover specific states and have more of an ear to the ground on what is going on in their states. So they'll be able to direct you to folks who are very invested in pushing these TeleHealth policies. Community health centers in particular have been very active during this time in ensuring TeleHealth policies stick around, so you might want to reach out to community health centers or the association that represents them in your state; they're usually called primary care associations, something like the California Primary Care Association. That's how you can find them.
>> MATT MYRICK: Wonderful. This is all great information. And I believe I have the very last question before we go to the audience. I understand your organization, the Center for Connected Health Policy, is one of the 14 TeleHealth resource centers. Please describe these centers, what your organization's role is, and some of the resources and tools that the TeleHealth resource centers offer.
>> MEI KWONG: Yeah. Besides what I just mentioned, that connection to local groups that might be working on TeleHealth and TeleHealth policies, the 12 regional resource centers provide program- and operational-level assistance to providers who are interested in starting a TeleHealth program. When the pandemic hit, they were helping a lot of providers get TeleHealth up and running. One of the things we have started to do a lot more is make sure the education tools cover the whole range of needs: language accommodations, accommodations for people with disabilities, and accommodations for people in both urban and rural areas. Even before the pandemic, most of our charge from our federal funding was concentrated in rural regions, for community health centers or hospitals, and now it's a broad range of folks we have to help: hospitals, individuals, and people in urban areas as well. Some of the things we have developed are fact sheets, some aimed at the consumer, the patient, informing them what TeleHealth is, and a lot of it is communication, letting you know what questions you should be asking. If you need an accommodation and you're doing TeleHealth, ask for that accommodation, and the provider should be able to provide it for you. And when we talk to providers: you might have a patient who needs an accommodation, so what are you going to do? Look for technology platforms that have accommodations. We try to set both the patients and the providers up to address any needs that come through the door and to make sure providers are aware of all their obligations. Now, that being said, technically I'm a lawyer, but I don't give legal advice; we're giving them information to let them know these are things or questions they will run into. I really do encourage you, if you have questions regarding TeleHealth, to reach out to the resource centers for answers. We are all in communication with each other, so if one center has a question it can't answer, they usually send it around, and one of us usually finds the answer if we don't know it already.
>> MATT MYRICK: Awesome. Thank you, Mei. Really good information. And this next question is for Suzy. Suzy, you can throw away your patient hat and put on your FCC hat. It’s the last question I wanted to ask you. What is the FCC doing to support the providers and during the pandemic and future progresses?
>> SUZY ROSEN SINGLETON: Hi. This is Suzy and I see we only have 3 minutes remaining. I will make this one minute so everyone else can have a chance to say something. The Federal Communications Commission has taken a lot of time to distribute funds — it is really about things like broadband and equipment and so forth. So we distributed about 650 million dollars thanks to congress of Congressional appropriation and our language for those providers who are receiving funds does emphasize that you must comply with the ADA and other applicable laws. They call out the ADA as a reminder because the FCC has limited jurisdiction. So we appreciate the HHS leadership in the accessibility space for TeleHealth. With that, I want to thank everybody for all your time for being here with us. Hopefully, we’ll be able to continue to see this space evolve rapidly, of course. Everyone has said it is very timely. It is very important and it’s never too soon to communicate with your local providers to try to affect some change and accessibility. Like 911, you don’t want to have to worry about it until you have to make a call, but you want to make sure it is available and accessible at that time and the same goes for TeleHealth for things like overnights or urgent calls you’re trying to make. Yeah, Mike?
>> MIKE McKEE: I want to add one more comment. I want to encourage people to start thinking about the time that it’s going to take for innovation, the time for change, and really the first time in history there’s been so many doctors and staff and everybody are just trying to figure out what is going on and we have to rethink all of this. We have to take this opportunity and this time just to encourage all of you to think about innovation and accessibility and diversity out. There make sure your voices are heard and that you’re seeing. It’s most important to take this opportunity. This is the biggest change I have seen thus far. We have to make sure that we keep moving forward, we push everyone and encourage everyone to go so we can have a seat at the table too so the future will be good for us.
>> MATT MYRICK: Okay. Wow. I wish we — I wish we were not out of time. I would love to hear from the panel — I mean, from the audience. And this has been a very great discussion. It is — we need to talk about these things. That’s why TDI is here and, you know, again, I want to thank the panel for being here and talking on the sensitive topic of at the time health and the good stuff and the bad stuff. We need to address that. All in one. So again, I want to thank you for all your time and your participation, and your efforts. So again, thank you.
>> Thank you.
>> Bye-bye.
>> Bye.
Apple Accessibility for the DHH
Sarah Herrlinger, Apple
Transcript
Accessibility & Where We Are
Jenny Lay-Flurrie, Microsoft
Transcript
Bringing the World Close Together
Sam Sepah, Google
Transcript
>> MATT MYRICK: Please make sure to identify your personal appearance, what you look like — if you have brown hair, glasses, what kind of shirt you’re wearing. Mike, maybe you can say you have your doctor’s uniform on. Mike is saying yeah, he has his white coat on.
>> MATT MYRICK: We’ll go ahead and get started. I would like to thank everyone for joining this session regarding TeleHealth. My name is Matt Myrick and I wanted to —
>> We need the interpreter. Matt needs an interpreter.
>> MATT MYRICK: Thank you all for joining this conference. This is a TeleHealth session. We have four individuals on the panel, plus myself. I am Matt Myrick, the TDI member at large. On this panel, let me self-identify, since we have participants in the audience who are DeafBlind: I have brown hair and wear glasses. I’m wearing a blue polo shirt with a TDI logo on my left chest. Next I would like to hand it off to Lisa.
>> LISA BOTHWELL: All right. Hello, everyone. I am Lisa Bothwell. I am a Caucasian woman in my 30s. I have short hair. I’m wearing a black shirt with a black jacket, business casual. I work as a manager at the Administration for Community Living, ACL. The goal of ACL is to support people with disabilities and older adults living in their homes and in their communities. My area of expertise is policy review: we review different policies and do development within those fields. So with that, I will turn it over to the next person.
>> MEI KWONG: Hello. I’m Mei Kwong with the Center for Connected Health Policy.
>> Sorry. We need the interpreter up.
>> MEI KWONG: I’m Chinese. I have long dark hair. I have on a blue dress with leaves on it and a pair of hoop earrings, and I wear glasses. I think I’m the only woman on the panel wearing glasses. The Center for Connected Health Policy looks at TeleHealth policy on the federal and the state level.
>> MATT MYRICK: Suzy?
>> SUZY ROSEN SINGLETON: Hi, everyone. My name is Suzy Rosen Singleton. How to describe myself: I’m wearing a black jacket with a necklace. I’ve got my hair up. It’s blond. I’m in front of a blue background that’s very plain. I work at the Federal Communications Commission in the Consumer and Governmental Affairs Bureau, in the Disability Rights Office. I am the chief of the Disability Rights Office, focusing on video programming, modern communications, and emergency communications access, and we collaborate with other bureaus in our agency on all three of those areas. I am here today also to share some of my personal experiences — a wonderful success story of my own with TeleHealth access. I am hoping to share that as well. I coordinate very closely with Lisa, and we worked together in the federal interagency accessible TeleHealth working group to make forward progress and ensure that DOJ, HHS, and all those agencies partner together to protect your rights to have accessible TeleHealth. Now, I will pass it back to Mike.
>> MIKE McKEE: I agree. Thank you so much. My name is Mike McKee. I will identify myself: I’m Hispanic and Caucasian, and I have brown auburn hair. Currently I’m in my home office. I have a blue and white striped shirt. And I’ll explain a little bit about my role. I’m a deaf family medicine physician. I work at the University of Michigan in their department of family medicine. I work as a physician there, and I’ll talk about my experiences interacting with patients in person and now moving into a more virtual sphere.
I also do research — we do investigations involving some of our clients and patients. So I’m looking forward to having that discussion with this panel.
>> MATT MYRICK: Okay. All right. Thank you. Okay. So let’s wait for the interpreters to come back on. Thank you. And so let’s go ahead and dive right into the panel discussion. I know this has been a hot topic, you know, with the pandemic that hit us last year, and there are lots of questions regarding TeleHealth issues, et cetera. So I would like to start with the very first question for Mei. Can you provide a brief description of TeleHealth — what exactly it is and how it is being used?
>> MEI KWONG: TeleHealth really just means using technology to provide health care services when the patient and the provider who is providing the service are not in the same location. They use technology to bridge that distance. And the types of services that it’s been used for vary from specialty to specialty. Some specialties can use TeleHealth for a lot of their services, and other specialties have a narrower range of services where they can use the technology. A lot of times it really is left to the provider’s judgment, in consultation with the patient, on when to use it, because you can have two patients being treated for the same thing where technology isn’t as good a way to provide the service for one patient as it is for the other. So even though I’m a TeleHealth proponent and advocate, even I say it is not appropriate for every single situation — though it should be available for anyone who may need to have those services provided via technology.
>> MATT MYRICK: Thank you. Thank you, Mei. This next question is for Lisa. Can you expand on the accessibility requirements for TeleHealth providers?
>> LISA BOTHWELL: Sure. This is Lisa speaking. I’ll focus on three laws — they might have other applications, but I will focus on three things: the ADA, the Americans with Disabilities Act; Section 504 of the Rehabilitation Act; and Section 1557 of the Patient Protection and Affordable Care Act, the ACA. Many of you are already familiar with the Americans with Disabilities Act. Title II applies to state and local governments, which are public entities. Title III applies to places of public accommodation — professional offices, health care provider offices, hospitals, social service center establishments, insurance offices, pharmacies, and so forth. Section 504 applies to entities receiving federal financial assistance and also to executive agencies, federal agencies. Section 1557 applies to entities receiving federal funding assistance and entities covered by Title I of the ACA, which would typically be through the state-based insurance marketplaces. So those three are the legal aspects that we’ll be talking about. As the DOJ noted last year, in the context of health care, non-discrimination based on disability means equal access to available health care services, regardless of whether those services are provided in person or through a virtual platform such as TeleHealth (which we abbreviate TH) or telemedicine (TM). That means accessible information and communication technologies. Effective communication means communication must be as effective for people with disabilities as it is for people without disabilities.
An interesting thing about effective communication is that it includes accessible information and accessible technologies in the definition of auxiliary aids and services. Let me back up just a second. Health care providers are responsible for providing auxiliary aids and services, which we call AAS. Typically you might be familiar with interpreters, captioning services, and a variety of alternate formats and so forth. I’m trying to go ahead and wrap up, so I want to provide two resources that have been released that might be pertinent to this audience. The ACA is under HHS, and under HHS is the Office for Civil Rights, the OCR. The OCR has released two bulletins. One of those is regarding civil rights requirements during the COVID pandemic, and the other talks about accessible information and technology. I will include those in the chat, if anyone is interested in reading more about those two documents. And I want to take this moment to provide some information about how to file a complaint. There are two places where you can file a complaint: one is with the HHS Office for Civil Rights, the OCR, and the other is through the Department of Justice, the DOJ — the DOJ’s disability rights section. I’ll also include those two links for more information and put that in the chatbox. With that being said, I’ll turn it back to Matt.
>> MATT MYRICK: Okay. Awesome. Thank you, Lisa. Can you elaborate — this question is again for Lisa. Um, interpreter? Yeah. So again, Lisa, can you clarify what effective communication is and provide some examples around effective communication?
>> LISA BOTHWELL: So as I said before, effective communication means that an entity — if it’s covered under one of those three laws or another civil rights law — has to ensure that communication with people with disabilities is as effective as it is with others without disabilities. Effective communication can mean that the care provider is responsible for providing those auxiliary aids and services, which include qualified interpreters. It can include captioning — C.A.R.T., which is realtime captioning. It can include alternate formats, for example braille and other formats. And I think that’s —
>> Interpreter: May I add to that?
>> MATT MYRICK: Yeah. Suzy?
>> SUZY ROSEN SINGLETON: Hi. This is Suzy speaking. Once upon a time, I was a litigator in California, and there was a case that I litigated against Etna hospital, which refused to provide interpreters for the spouse of a patient who was on life support and comatose. They needed to disconnect life support, but the hospital said that because the spouse was not the patient, they refused to provide accommodations. I was at the California law center at the time. The ADA had just passed — this was in ’91 — but there were no regulations yet promulgated under it. So we ended up going to the 9th Circuit court to discuss what effective communication meant in the absence of regulations, and the decision published then interpreted it to mean requiring interpreters for complex communication. So it really does depend on the communication itself — whether it’s rote or complex, whether it’s significant or minor. There are a number of different factors that go into that determination, and writing back and forth could be considered effective communication in some contexts, but it depends on the details of the case and situation itself. It is a very complicated concept. It is very individualized and fact-based, depending on the environment as well. Basically, from the 1990s to today, the interpretation has been that effective communication requires that the individual with the disability feels that it is effective. For example, if you’re on a TeleHealth appointment and you would prefer an interpreter, then you should have that type of accommodation. Effective communication is very idiosyncratic and dependent on the individual’s needs. There is a whole suite of options, and you have to ask the patient and the patient’s family members. That can help clarify what effective communication is; the lawyers working in a lot of gray areas here are trying to nail it down.
>> MATT MYRICK: Yeah. Thank you, Suzy. It’s not a one-size-fits-all category — it depends on communication modalities. That’s very important.
Okay. So moving on. This question is for Dr. Mike: why are TeleHealth platforms not accessible enough, and what features would make TeleHealth platforms truly accessible for all? Can you share some insight on that, Dr. Mike?
>> MIKE McKEE: Sure. Thank you so much for the question. There is a variety of platforms out there — for example, we have video visits, Zoom, a variety of different platforms available. Some of those aren’t accessible. For hospitals, doctors, and institutions or systems, when they contract, maybe they are looking for the cheapest option, not the most advanced. So with the platforms that are available, there are some limitations. For Zoom, you can have live captioning as a feature — automated live captions — or, if you would like, a 3-way meeting. A video visit would be a simple call to a doctor on video: I can see the doctor and the doctor can see me. Or, for a 3-way meeting, you would have a third party, a video interpreter, involved — and then the institution would have to contract in advance to have that available. It’s not a question of whether the technology is or isn’t available. It’s whether the institution, the staff, the doctors, the health care system know about those technologies, and whether those technologies can deliver effective communication. What happens is that sometimes a technology is widespread in the world, but it is not available in certain sectors. We want to advocate and fight for having that accessibility available for everyone, consistent across all populations — we would like one mandated experience for everyone. There are complexities involved with that. For C.A.R.T.: Zoom’s automated live transcription sometimes has errors, so you can ask for C.A.R.T. instead. That involves a link, and some staff or physicians aren’t really knowledgeable about going to a link to connect to the C.A.R.T. services. So it’s important we educate them and make that transition easier and also more secure. I would recommend we think about the questions to ask, and the first question is not really the cost. First we should think about availability — whether it is accessible for deaf and hard of hearing people and all populations — and about equity, equal access. For example, maybe the ease of use is there for hearing people, but deaf populations don’t have the same experience. We’re really fighting for that.
>> MATT MYRICK: Okay. Thank you — I’ll wait for the interpreter. Thank you, Dr. Mike. Suzy, I know that you have some experience in this area. Can you describe what inaccessible TeleHealth looks like to you?
>> SUZY ROSEN SINGLETON: Sure. Thanks, Matt. This is Suzy speaking, and I’ll pick up on what Dr. Mike was just saying as well. You’re right, there are many different possible ways to be communicating with your health care provider, but we’re focusing now only on two-way video platforms, not audio-only platforms. In March of 2020, I got back home from Sun Valley, Idaho, which is one of the hotspots for skiers. They had an outbreak of COVID, and when I got home, I was experiencing some symptoms, with a high fever and sore throat. I was very, very concerned and didn’t want to go to a hospital and expose other people. I think at that time in March, many of us were in a state of panic because of the unknowns of the pandemic; we didn’t really know the risks at that time. So I wanted to stay home to the extent possible. I have an app for a TeleHealth portal that I attempted to use for an unscheduled appointment — not a regularly scheduled appointment. It was at 8 o’clock at night and my fever was climbing; I wanted to see what I could do to treat myself. I went into my local app, but there were no questions there for how to request an interpreter. I decided to go into the waiting room, where many doctors were attending, and I was still on the lookout for where I could request an interpreter. Eventually my call was taken by the first doctor, and I told him I was deaf and that I was trying to get an interpreter, but there was no way to pull one in, particularly for an unscheduled appointment. So I struggled to communicate. I think they told me to take 8 Advils every 2 or 3 hours. I did that, regardless of whether that was exactly right, but I was still very concerned about my understanding of this doctor’s instructions. The next day, of course, I was still ill and at home. My fever went down a bit, but I ended up going to the hospital after all. I tested negative — it was just the flu — which was both good and bad, you know; of course, I could still catch COVID later. Regardless, I reached out to my local TeleHealth director, the person who runs the system that uses the app, and we then worked together to develop a third-party plug-in for their platform. It’s a really large health care network in D.C., so they luckily had the means for doing that. Basically, they developed a platform that has a front end with instructions on it: it asks if you need an interpreter and instructs you where to click, and if there are any other accommodations you need, you click elsewhere. If you do need an interpreter, you click and are directed to a new video window, and the interpreter arrives within 1 to 2 minutes — very, very rapidly, because they established a contract with interpreters. The whole platform displays the doctor, the interpreter, and myself, as well as a chatbox along the side. So we were able to have a discussion there and really communicate on unscheduled appointments. I was a tester for that — I didn’t have another emergency situation; I was helping them test it. It was very nice they were able to establish that. I asked them what they did, and they said they developed an organic solution using Bluestream and contracting with Amwell. Looking into it further, there are four large TeleHealth platform providers: Amwell, Teladoc, Doctor On Demand, and MDLive.
So it is important to consider how to reach out to them, because they need to have platforms that are made with different options, right? Options for captioning, or an interpreter, or even another video window for a caregiver — if you have a person with a cognitive disability who needs a caregiver, for example. This side of accessibility is something the provider themselves may not be able to handle, because they are handed a product as they receive it, and that product needs to be made accessible. There are things that need to go into that, and they need to coordinate. Another important aspect of the process is not just technical; it is training of personnel. My local provider explained that they required every single TeleHealth provider to go through a 30- to 45-minute initial training session and then another 20 minutes of testing, and they do testing and training on a monthly and quarterly basis. That way, all of their TeleHealth providers are versed in working with accommodations and pulling in interpreters, for example, on these unscheduled appointments. So that was my experience — I wanted to share that success story with you — but I wanted to mention a few other concerns and considerations. I don’t know whether that particular platform is hearing aid compatible; I had asked and made sure they should be aware of that. That’s where the vendors need to have their own checklists, so that they have in mind all of the different accommodations they could need to provide. So, you know, certainly there is lots of work left to do, and I’m looking forward to everyone continuing to work together to make sure that happens.
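To make the flow Suzy describes more concrete, here is a minimal sketch of an accommodation-aware intake for unscheduled visits. It is purely illustrative: every type and function name is invented for this example, and none of it comes from Bluestream, Amwell, or any other real platform’s API.

```typescript
// Illustrative sketch only. A hypothetical intake flow for unscheduled
// telehealth visits, modeled on the front end described above. All types
// and function names are invented; none come from a real platform's API.

type Accommodation = "asl-interpreter" | "captions" | "caregiver-video";

interface VisitRequest {
  patientId: string;
  scheduled: boolean;              // false for walk-in/urgent visits
  accommodations: Accommodation[]; // asked up front, before the doctor joins
}

interface VideoSession {
  participants: string[]; // e.g., ["doctor", "patient", "interpreter"]
  chatEnabled: boolean;   // side chat box as a communication fallback
}

// Arrange accommodations before the visit starts, rather than leaving the
// patient to negotiate access mid-appointment.
async function startVisit(req: VisitRequest): Promise<VideoSession> {
  const participants = ["doctor", "patient"];

  if (req.accommodations.includes("asl-interpreter")) {
    // An on-demand interpreter pool under a pre-negotiated contract is what
    // lets an interpreter join within minutes, even for unscheduled visits.
    participants.push(await requestOnDemandInterpreter());
  }
  if (req.accommodations.includes("caregiver-video")) {
    participants.push("caregiver");
  }

  return { participants, chatEnabled: true };
}

// Stub standing in for the contracted interpreter service.
async function requestOnDemandInterpreter(): Promise<string> {
  return "interpreter";
}
```

The design point in Suzy’s story is that the accommodation question lives on the front page of the portal, so the routing above runs before the patient ever reaches a doctor.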
>> MATT MYRICK: Okay. Wow. Thank you, Suzy. Again, I’m just (inaudible) — this is a follow-up question I wanted to ask you. You have experienced a transition from inaccessible to accessible TeleHealth. Can you describe the service that you currently use, what it looks like, and what you find most helpful?
>> SUZY ROSEN SINGLETON: Yeah. I mean, that portal I believe meets my needs right now. I have seen a lot of different people using different portals and systems, and really, we shouldn’t be burdened as individuals with the obligation to tell people what we need; there should be instructions available on the front page when you’re going into the portal or the app, giving you the opportunity to request accommodations upfront. You shouldn’t have to go searching for it — and not only for accommodations, but for language as well; I have seen some portals that have Spanish options. So the perfect portal, I think, would have a very well-thought-out, very well-tested front end and well-trained personnel, and that’s something we still haven’t quite seen. I’ve been asking around for other people’s experiences, and they have varied widely. That’s something that I think should be accomplished.
>> MATT MYRICK: Okay. Thank you, Suzy. The next follow-up question is for Dr. McKee. What have you heard from your hearing, non-signing colleagues regarding their experience trying to serve their deaf, hard of hearing, and DeafBlind patients using TeleHealth? And what would help improve their experience?
>> MIKE McKEE: Thank you for that question. This is Dr. Mike. One of the challenges my hearing colleagues mention is that everything right now is new for everyone. There are a lot of changes happening, and a lot of people are learning on the fly, trying to accommodate the needs of people through virtual health platforms. That’s not to say we’re not prioritizing accessibility, but education and training need to happen, and those things should happen as soon as possible. For now, we have to open up different possibilities to connect with patients. Sometimes patients will be tech savvy and can use the portal, but I emphasize that an institution may have an accessible platform while the patients aren’t aware of what to do or how to use it. So even when the technology part is accessible, it can still be a struggle for them. For my hearing colleagues, one thing I noticed — and I’ll discourage you from this: let’s say a physician tries to connect with a deaf patient, realizes they’re a signer, and then wants to connect through VRS. Unfortunately, the VRS interpreter may not be medically certified; they’re not really a skilled, qualified medical interpreter. We need to reach out to qualified medical interpreters who have education in that field — there’s more work to be done in that aspect. Some people do say it’s easy because a VRS interpreter is ready and can be contacted immediately, but I really try to explain to them that those interpreters are not medically certified. Also, you know, video is rich in data. Suppose we reach out to a patient on video: we can see their background and their home environment. We can see their presentation. Are they struggling? Are they having shortness of breath? Through VRS, there is no visual information we can glean from the patient. That’s not equally accessible, and it’s a risky approach too. There are risks in explaining medical terminology through the interpreter, and their interpretation may be incorrect. So relay is easy to reach out to initially, but there really can be communication breakdowns when trying to use it. Speaking of C.A.R.T. — we have C.A.R.T. here, available. Let’s say there’s a separate Zoom link for C.A.R.T., and I ask you to go to that Zoom link. Sometimes that isn’t efficient or smooth. That’s something that needs to be worked on. Live captioning is newer, because live captioning was (inaudible) with errors in the past. The hard of hearing community really wanted live captioning included — it was something they advocated for — but there are limited options when trying to set up C.A.R.T. and live captioning. So there are pros and cons to each approach. It’s a different world out there, and we’re not blaming anyone, but we want people to be creative and find different ways and strategies to make sure everyone feels comfortable with the technology. From here on out, this is going to be a new world. So we want clear training and a standard. Like Suzy mentioned, there should be required trainings, like 45-minute trainings; right now there are no trainings for doctors that I’m familiar with. Some people may be familiar with technologies and more tech savvy, as opposed to more seasoned physicians. Unfortunately, that can impact the patient experience.
And sometimes deaf and hard of hearing patients know that COVID and the pandemic are stressful, and they tend to accept what is given to them. I would encourage them to speak up and make sure they have equal access, on the same footing with the hearing community.
>> MATT MYRICK: Awesome. So this next question I have is for Mei. Mei, who should be responsible for ensuring that TeleHealth services are accessible? Is it doctors, or the technology companies that developed the platforms, or some combination of both?
>> MEI KWONG: I would say it’s both. Part of it is a legal answer, and part of it is that they simply should do it for a variety of reasons. The legal answer is that doctors and providers need to accommodate their patients with whatever accommodation they may need, whether it is for a disability or a language barrier. That is required by law, and for those who have not seen it, Lisa has been putting some great resources in the chat that reference some of those guidances and laws. So they’re required to make that accommodation, and as was also pointed out, it must be an effective accommodation. It’s not simply “I made something available” — it needs to work for that particular patient. So there’s a legal responsibility on the provider. Just because you’re using TeleHealth doesn’t mean all your legal responsibilities suddenly go away. You still have your responsibilities to patients regarding a disability or a language barrier. You still have to abide by HIPAA and privacy. All of those still apply; you just may have to take a different approach in order to meet your responsibility.
Now, the TeleHealth industry does not have the same legal responsibility, but there is a variety of reasons why they should make these accommodations, these options, available for providers to use — simply because the providers have to have something; they need to use it. It makes really great business sense if you’re the only company that has all these options. If I’m a provider and I have patients who need some type of accommodation, and there’s one vendor out there who has all the accommodations that meet my needs and nobody else does, I’m probably going to go to that vendor. So it makes really good business sense for the technology companies to develop those options and make them available. It’s also the right thing to do. I mean, people with a disability are the same as everybody else — they will have health care needs. So why wouldn’t you make those options available or develop them? I think part of the problem before the pandemic was that TeleHealth was such a niche area, so small and not so widely utilized, that there probably wasn’t the pressure to make accommodations or to develop them. For example — and I’m not saying all providers were doing this — you may have had a provider who, if they had a deaf patient and a TeleHealth option, probably said to the deaf patient, “Why don’t you come and see me in person?” That way they could provide the accommodations the patient may need, as opposed to doing it via TeleHealth, and they never developed those protocols through their TeleHealth platform. But then COVID-19 hit. Everybody needed health care services at that point, and everybody turned toward TeleHealth. So you had that gap during the pandemic where those technology needs were not developed, and maybe some of the training that Dr. Mike and Suzy touched upon wasn’t there for the providers, so they didn’t know how to accommodate and work with those particular patients via TeleHealth, even though during the pandemic they needed to do that. I think the responsibility is on both: legally, providers aren’t going to get out of it — that’s on them — but the industry also needs to make sure those options are there and available for the providers, and for the patients to be able to access.
>> MATT MYRICK: Wonderful, thank you, Mei. Before we go to questions and answers, I have a couple more questions I wanted to ask — I want to make sure we have enough time for Q&A. This question is for Lisa. Lisa, with respect to TeleHealth, describe your role at the Administration for Community Living, and from your perspective, what do you think are the biggest barriers to making TeleHealth fully accessible?
>> LISA BOTHWELL: Sure. Hi again. This is Lisa. First of all, like I explained previously, I work for the Administration for Community Living, the ACL, which is under HHS. We support people with disabilities and older adults living within their communities, and one of our biggest activities is grants. ACL grantees provide different services in the community through different organizations and non-profits, many of which you’re probably familiar with. Those include centers for independent living and assistive technology programs — through those programs you can typically borrow a piece of loaner technology or get an assistive device. I remember, quite a long time ago, I received a TTY from an assistive technology program; I’m sure things are different now. There is also the protection and advocacy agency, which is a legal type of agency available in each state and territory. We also fund state agencies on aging, which work with older adults to provide related services. There was one more I wanted to mention — really there are quite a few; there is a whole list of organizations out there in the community that work with people with disabilities and older adults. I want to add that we also fund RERCs. I believe Bobby mentioned the Gallaudet RERC — that is a Rehabilitation Engineering Research Center. There are quite a few different grants out there.
And I want to specifically talk about the assistive technology programs. Many of the questions we receive about assistive technology relate to TeleHealth, and I noticed that in this conference. Assistive technology programs are doing some really wonderful things. They’re using some funding — specifically CARES Act funding — for TeleHealth, and that’s used to provide hotspots or other types of TeleHealth equipment for the community. So I really want to encourage everyone to reach out to your state’s or territory’s assistive technology program, and I can include a link where you can find how to contact those programs. You can be involved in demonstrations of equipment, and they have loan programs where you can borrow assistive technology devices — it would depend on their policies. So really, I want you to interact with AT device centers and just be aware of everything that’s out there to support TeleHealth. You can also contact your center for independent living, which serves people with disabilities, and your area agency on aging, and you can get a referral to the right type of organization. I strongly recommend that you reach out to one of those if you have any questions about TeleHealth. I would encourage you to do that.
I want to circle back to an answer one of our panelists gave about working with states. I want to encourage people to work with the different state organizations or agencies that are receiving some of our grant funding. They’re very engaged with the community, and they really (inaudible) the community have the services they need to live independently at the state level or in assisted living. So at the state level, reach out to them and better understand how they make policy decisions. In my role at ACL, I work in a very small office, the Office of Policy Analysis and Development, OPAD, and we review many policies — everything from payment rules and regulations to civil rights documentation, notices of rulemaking, and website design. We look to the community and see what they need, what that community’s demographics are, and what the research out there is showing, including the research that ACL is funding. We listen to what the community is saying, and we want to apply that feedback to modify our approach and the policies we’re developing. Not all of our changes are accepted, just to be clear. So that’s generally my role at the ACL. And like Suzy mentioned previously, she led an interagency federal partnership on accessible TeleHealth that I worked in. Many different federal agencies were involved in that group, including the FCC; HRSA, which is the Health Resources and Services Administration; the HHS Office for Civil Rights; and the Department of Justice. We really all came together and talked about: okay, what can we do to move forward with making TeleHealth accessible? What is the next step? Those discussions took place. We also wanted to think about our goals and our vision for educating TeleHealth providers themselves. Suzy just commented that she was working with HRSA, and we have engaged with HRSA too. So we’re trying to do a lot more outreach to the providers themselves and remind them of their civil rights obligations.
One thing we did last year was with the Department of Justice and Dr. Mike as well — he was part of a group that presented at a TeleHealth conference last year. It was wonderful. It was a coordinated effort from quite a few federal agencies and also the TeleHealth resource centers. It was a great project, and we are trying to find different ways to continue doing those sorts of things. And I want to add one last part just before I wrap up. Last week was very exciting: ACL, joined by the Office for Civil Rights, presented to the interagency policy committee of the Executive Office of the President at the White House. We presented what the community is saying about accessible TeleHealth and what the community needs from accessible TeleHealth. And that brings me to my last point. The biggest point we discussed at the White House presentation was the need for standardized TeleHealth accessibility. This came from feedback from the community. And with that, I will close and turn the floor over to someone else.
>> MATT MYRICK: Okay. Thank you, Lisa. Suzy, did you have a comment? And then we’ll turn it over to Mike.
>> SUZY ROSEN SINGLETON: I wanted to add to what Lisa said. This is Suzy speaking. Lisa explained how the states have been working on accessible TeleHealth services and platforms, and I want to drive home that consumers can advocate for that as well. The Deaf and Hard of Hearing Consumer Advocacy Network, DHHCAN, has a white paper on TeleHealth accessibility. It is really important that consumers are aware of that. That’s a ready tool for your use in your states — as a patient, as an individual, or as an organization, such as a commission for the deaf and hard of hearing, or any number of organizations — when you go to speak with your local providers about what you need. There’s already a tool written and out there, so you don’t have to reinvent the wheel. That’s one quick thing I wanted to add in terms of resources. I don’t have a link; I will look for it while others are talking.
>> MATT MYRICK: Okay. Thank you, Suzy. Mike, did you want to add anything else? Go ahead, Mike.
>> MIKE McKEE: Yes. Yeah. This is Mike here. Another important thing for all of you to recognize and know is that right now there are different options open and available in TeleHealth. Some people mentioned audio — so there are phone visits, and also video visits, and sometimes visits can be both audio and video. Right now, reimbursement — meaning when a doctor submits to insurance and gets paid for the services — is open: phone and video are reimbursed the same, regardless. We suspect that later on, that will change, which means they will see phone consultations as not equivalent and will reimburse less for them. It is important for all of you to fight this. What I suspect is that a deaf person may use VRS, and that may be counted as a phone visit, so there’s a reimbursement risk there. We need to fight to make sure it is counted the same, and we need to continue to have different options available too. Sometimes with video there isn’t accessibility — we understand that. Sometimes a visit needs to be in person, which is fine, but we need to make sure there’s equivalent accessibility. If someone is sick, they have to come in, and we want to make sure we are able to support them in having an in-person visit too. And hopefully CMS will continue to reimburse at a rate equivalent to in-person visits, so that people have different options. Again, the point is making things accessible and making sure we consider all of those things too. Mei, did you raise your hand?
>> MEI KWONG: Yes. Thank you. Since we’re talking about what actions you might be able to take on the state level, I want to tag on to what Dr. Mike was saying. It is very important to act now, because decisions are being made now. And there is an increased interest in making sure there aren’t disparities between communities, but I think a lot of policymakers are thinking more of people of different ethnicities or incomes, and they may not be thinking of people with disabilities as a community. Not all policymakers, but I do wonder, based on conversations I have heard. It goes beyond just race and income; there are other factors, such as age, disability, et cetera, that may require accommodations as well. So Mike brought up a very important point about audio-only. That is one thing that we’re not quite certain is going to stick around, and if it is something that is needed for access by the deaf and hard of hearing community, or by another community with a disability, they need to make their voices heard. How do you go about doing that? The first step is to talk to your state representatives, but in a lot of states there are also groups coming together to push TeleHealth policy. I’m based in California; there’s a California TeleHealth policy coalition that CCHP convenes. There are similar things going on in other states, but if you’re not quite sure where to look, I suggest you reach out to the TeleHealth resource centers. There are 14 of them. CCHP is the national resource center on policy; there is also a national resource center on technology; and there are 12 regional resource centers that cover specific states and have more of an ear to the ground on what is going on in their states. They’ll be able to direct you to folks who are very invested in pushing these TeleHealth policies. Community health centers in particular have been very active during this time in ensuring TeleHealth policies stick around, so you might want to reach out to a community health center or the association that represents them in their state — they’re usually called primary care associations, so they might be named something like the California Primary Care Association. That’s how you can find them.
>> MATT MYRICK: Wonderful. This is all great information. And I believe I have the very last question. I understand your organization, the Center for Connected Health Policy, is one of the 14 TeleHealth resource centers. Please describe these centers, what your organization’s role is, and what some of the resources and tools are that the TeleHealth resource centers offer.
>> MEI KWONG: Yeah. Besides what I just mentioned — that connection to local groups that might be working on TeleHealth and TeleHealth policies — what the resource centers do, the 12 regional ones, is provide program- and operational-level assistance to providers who are interested in starting a TeleHealth program. So when the pandemic hit, they were helping a lot of providers get TeleHealth up and running. One of the things we have started to do, or started to do a lot more, is to make sure the education tools cover the full range of needs people have: language accommodations, accommodations for people with disabilities, and accommodations for people in urban as well as rural areas — because even before the pandemic, our charge from our federal funding was concentrated more on rural regions, on community health centers or hospitals, and not on the broad range of folks we have to help now, which includes hospitals, individuals, and people in urban areas as well. Some of the things we have developed are fact sheets, including fact sheets aimed at the consumer, the patient themselves, informing them on what TeleHealth is. A lot of this is communication — letting you know the questions you should be asking: if you need an accommodation and you’re doing TeleHealth, ask for that accommodation, and the provider should be able to provide it for you. And when we talk to providers: you might have a patient who needs an accommodation — what are you going to do? Look for technology platforms that have accommodations. We try to set both the patient/consumer and the providers up to address any needs that come through the door, and to make sure providers are aware of all their obligations. Now, that being said — technically I’m a lawyer, but I don’t give legal advice. We’re giving them information to let them know these are things or questions they will run into. I really do encourage you, if you have questions regarding TeleHealth, to reach out to the centers for answers. We are all in communication with each other, so if one center has a question they can’t answer, they usually send it around, and one of us usually finds the answer for somebody if we don’t know it already.
>> MATT MYRICK: Awesome. Thank you, Mei. Really good information. And this next question is for Suzy. Suzy, you can take off your patient hat and put on your FCC hat. It’s the last question I wanted to ask you: what is the FCC doing to support providers during the pandemic and going forward?
>> SUZY ROSEN SINGLETON: Hi. This is Suzy, and I see we only have 3 minutes remaining, so I will make this one minute so everyone else can have a chance to say something. The Federal Communications Commission has distributed a lot of funds for things like broadband and equipment and so forth. We distributed about 650 million dollars, thanks to a Congressional appropriation, and our language for the providers who are receiving funds does emphasize that they must comply with the ADA and other applicable laws. We call out the ADA as a reminder because the FCC has limited jurisdiction, so we appreciate HHS’s leadership in the accessibility space for TeleHealth. With that, I want to thank everybody for your time being here with us. Hopefully, we’ll be able to continue to see this space evolve — rapidly, of course. As everyone has said, it is very timely and very important, and it’s never too soon to communicate with your local providers to try to effect some change in accessibility. Like 911, you don’t want to have to worry about it until you have to make a call, but you want to make sure it is available and accessible at that time — and the same goes for TeleHealth, for things like overnight or urgent calls you’re trying to make. Yeah, Mike?
>> MIKE McKEE: I want to add one more comment. I want to encourage people to start thinking about the time it’s going to take for innovation, the time for change. Really, for the first time in history, so many doctors and staff — everybody — are just trying to figure out what is going on, and we have to rethink all of this. We have to take this opportunity and this time. I just want to encourage all of you to think about innovation and accessibility and diversity out there. Make sure your voices are heard and that you’re seen. It’s most important to take this opportunity — this is the biggest change I have seen thus far. We have to make sure that we keep moving forward, that we push and encourage everyone, so we can have a seat at the table too and the future will be good for us.
>> MATT MYRICK: Okay. Wow. I wish we were not out of time — I would love to hear from the audience. This has been a very great discussion. We need to talk about these things; that’s why TDI is here. Again, I want to thank the panel for being here and talking on the sensitive topic of TeleHealth — the good stuff and the bad stuff, all in one. We need to address that. So again, I want to thank you for all your time, your participation, and your efforts. Thank you.
>> Thank you.
>> Bye-bye.
>> Bye.
Customer Obsession for Customers with Disabilities
Peter Korn, Amazon
Transcript
I’m Peter Korn, Director of Accessibility for Amazon’s Devices and Services organization. I am a middle-aged white man, with short, curly brown hair, and a beard. I am sporting a fiberglass cast on my right arm, in some of the colors of the rainbow pride flag; from elbow to wrist they are: purple, blue, green, yellow, orange, red. And I’m wearing a blue denim collared shirt.
It is an honor, and a pleasure, to be here, speaking with you today.
Today, I would like to share with you a little bit about Amazon’s philosophy and approach to accessibility, our focus on employees who are deaf or hard of hearing, and our work to make our products, services, and experiences not only accessible to our deaf and hard of hearing customers but also delightful for them to use.
Amazon was founded on four principles, one of which was “customer obsession.” Today, Amazon’s corporate culture is driven by 16 leadership principles, with “customer obsession” continuing to lead the list.
The subject of my remarks today is Customer Obsession for Customers with Disabilities.
TDI works to advance the information and communications interests of the 48 million Americans who are deaf, or hard of hearing. These Americans are among hundreds of millions of people worldwide who are deaf, or hard of hearing. We would like you to be our customers, which means we need to make products that are not only accessible to you but which are delightful for you to use. Fundamentally, we want to earn your business; to be worthy of your business.
To do that, we don’t want to make products for people with disabilities, we want to make products with people with disabilities for everyone.
Part of how we do that is by hiring great people with disabilities to help design, develop, test, support, market, and deliver our products to everyone. In addition to having employees who are deaf, hard of hearing, or have other disabilities, embedded in the product teams building products for everyone, our product teams also work with Amazon employees who are part of our affinity group AmazonPWD (Amazon People with Disabilities), who in turn consult on product design, research, and development for products and services across the company.
To build great, accessible products for everyone, we need to attract, and support, employees with disabilities. One of the people helping lead that work is my colleague Brendan Gramer, President of AmazonPWD. I’d like to invite you to meet Brendan.
<Play captioned video “Meet Brendan_1080p.mp4” – 1:00” long>
One of the things that Brendan did is help create the ASL interpreter program in the US, staffed by full-time employees, to support our employees who use ASL. I’d like to share a short video about how that program is being used by one of our designers.
<Play video “Amazons interpreters translate a designer’s vision_1080p” – 1:32” long>
You may notice the ASL fingerspelling logo at the end of that last video. Brendan created that logo, and it is approved for use anywhere the Amazon corporate logo can be used.
I mentioned above that we have employees with disabilities in roles across the company who help in all facets of not only creating our products and services but delivering them to our customers. We have deaf and hard of hearing employees working in our fulfillment centers across the country, and around the world, again supported by sign language interpreters. This next video is from a fulfillment center in India. Fulfillment centers in India together employ many hundreds of deaf employees.
<Play video: “Being Inclusive at Amazon India Fulfilment Centres_1080p” – 0:40” long>
Captions
We have been building accessibility features and assistive technologies into our products and services for many years, to make our products accessible for, and delightful to, customers with a range of disabilities. Today, I’d like to focus specifically on those we have built for customers who are deaf, and hard of hearing.
One of the first things we did was build caption support into Prime Video, so customers can use captions when they watch movies and TV shows. We require that our film studio partners who provide content to Prime Video include captions where they have them; and in some cases, we may even create captions ourselves, so that our customers can enjoy movies and TV shows with them.
Prime Video Direct is our program for enabling independent filmmakers to submit movies directly to our Prime Video catalog for customers to enjoy. We pay these independent filmmakers based on how often their video is shown. As part of our standard terms of agreement for Prime Video Direct filmmakers, we require that all independent filmmakers in this program provide captions for all of the videos that they submit to us.
When we launched our streaming media player, Fire TV, it included a number of apps for streaming media from other services, like Netflix, and Showtime. A requirement for inclusion in that launch was that streaming media services had to support caption decoding and playback for their videos on Fire TV.
Fire Tablet and braille
Our first device was the Kindle. We followed that up with the Fire tablet, and of course, brought Prime Video with caption support to our tablets. One of the next things we did was create our own screen reader from scratch – VoiceView – which enables customers who are blind to take full advantage of the Fire tablet. And not long after we introduced VoiceView, we added braille support, enabling customers to read books and browse the web via a connected braille display, rendering in both contracted, and uncontracted braille, as well as unified English braille, and computer braille. We support braille input in those same character encodings. This enables our deafblind customers to enjoy Kindle books and web browsing, as well as many other features on our Fire tablets.
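As a rough illustration of the braille codes just listed, here is a minimal sketch of what settings for a connected braille display might look like. The names are hypothetical, invented for this example; they are not VoiceView’s actual configuration interface.

```typescript
// Illustrative sketch only. Hypothetical settings for a connected braille
// display, mirroring the codes named above; not VoiceView's actual API.

type BrailleCode =
  | "contracted"      // grade 2, with abbreviations
  | "uncontracted"    // grade 1, letter for letter
  | "unified-english" // Unified English Braille
  | "computer";       // computer braille

interface BrailleDisplaySettings {
  outputCode: BrailleCode; // how text is rendered to the display
  inputCode: BrailleCode;  // the same encodings are accepted for input
}

const example: BrailleDisplaySettings = {
  outputCode: "contracted",
  inputCode: "contracted",
};
```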
Alexa
When we launched Alexa almost seven years ago with the original Echo, we heard from customers with motor impairments that Alexa helped them gain independence—being able to use their voice to turn the lights or music on, for example, giving them access to a world that was very difficult for them to use otherwise. Similarly, we witnessed how Alexa and Echo became very important for customers who were blind or visually impaired where they could now use the simple voice interface, rather than a screen reader that translated and navigated the visual information of the graphical user interface into a spoken interface.
However, we also recognized that this was a product that wasn’t immediately welcoming of customers who are deaf, or hard of hearing, or who have a speech impairment. A few years later, we added a screen to Echo, with our product the Echo Show. And once we had a screen, we had what we needed to make Alexa more accessible to customers who are deaf or hard of hearing, or who have speech impairments.
This led to the debut of a line of access features that Amazon pioneered for spoken interfaces and ambient computing: Alexa Captions. Alexa Captions lets customers see captions for virtually all of Alexa’s responses on an Echo Show device. And those captions are rendered with the full CEA–708 caption display settings, enabling customers to choose their preferred font size and font color and background color, etc. for their captions. We then published a caption programming interface to allow third-party Alexa skills to add captions to their spoken or recorded audio.
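To make those CEA-708 display settings concrete, here is a minimal sketch of the kind of per-viewer caption styling such a standard covers. The field names are hypothetical, invented for this example; they are not the actual Alexa Captions settings interface.

```typescript
// Illustrative sketch only. The kind of per-viewer caption styling the
// CEA-708 standard covers (the text above mentions font size, font color,
// and background color); field names are hypothetical, not Alexa's API.

interface CaptionStyle {
  fontSize: "small" | "medium" | "large" | "extra-large";
  fontColor: string;         // e.g., "#FFFFFF"
  backgroundColor: string;   // e.g., "#000000"
  backgroundOpacity: number; // 0.0 (transparent) through 1.0 (opaque)
  edgeType: "none" | "drop-shadow" | "raised" | "depressed" | "uniform";
}

// A high-contrast preset a viewer might choose.
const highContrast: CaptionStyle = {
  fontSize: "large",
  fontColor: "#FFFF00",
  backgroundColor: "#000000",
  backgroundOpacity: 1.0,
  edgeType: "uniform",
};
```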
The second Alexa accessibility feature we introduced was Tap to Alexa, which allows customers to interact with Alexa using touch instead of voice. This feature comes pre-configured with a collection of common phrases, tied to icons that display on the screen, which customers can tap via touch. Customers can then use the on-screen keyboard to create their own utterances, which they either send immediately to Alexa or associate with new icons for quick use later. And just like captions have gone mainstream, customers without speech or auditory impairments consistently tell us that they use Tap to Alexa, often along with Alexa Captions, as a handy way to interact with Alexa in quiet spaces—for example, if they want to use Alexa but are in a room with a sleeping family member.
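As a rough sketch of the tile-to-phrase model just described — pre-configured phrases bound to on-screen icons, plus typed utterances saved as new tiles — consider the following. All names are invented for illustration; this is not Amazon’s implementation.

```typescript
// Illustrative sketch only. A tile-to-utterance model like the one Tap to
// Alexa is described as using; names invented, not Amazon's implementation.

interface Tile {
  icon: string;      // icon shown on the screen
  utterance: string; // text sent to Alexa when the tile is tapped
}

// Pre-configured tiles for common phrases.
const preconfigured: Tile[] = [
  { icon: "weather", utterance: "What's the weather?" },
  { icon: "timer", utterance: "Set a timer for 10 minutes" },
];

// Utterances typed on the on-screen keyboard can be saved as new tiles.
const userTiles: Tile[] = [];

function saveAsTile(icon: string, utterance: string): void {
  userTiles.push({ icon, utterance });
}

// Tapping a tile submits its utterance in place of speech.
function submit(tile: Tile): string {
  return tile.utterance;
}
```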
While not built explicitly as accessibility features, I want to touch on three other features that can be quite valuable to customers who are hard of hearing.
First, with Preferred Speaking Rate, you can ask Alexa to adjust the speaking rate to your preference. Alexa can speak slower, which may make speech easier to understand, or faster, which many of our blind and low vision customers prefer because they are used to fast speaking rates. Just say, "Alexa, speak slower" or "Alexa, speak faster" to activate this feature.
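For readers curious how a speaking-rate preference maps onto text-to-speech markup, here is a minimal sketch using the standard W3C SSML prosody element, which Alexa's speech synthesis supports; the helper function itself is illustrative, not an Amazon API.

# A minimal sketch of how a preferred speaking rate maps onto standard
# SSML markup (the W3C <prosody> element). The helper is illustrative.
def with_speaking_rate(text: str, rate: str) -> str:
    """Wrap text in SSML so the synthesizer speaks at the given rate.

    rate can be a keyword like "slow" or "fast", or a percentage
    such as "80%".
    """
    return f'<speak><prosody rate="{rate}">{text}</prosody></speak>'

print(with_speaking_rate("Here is today's weather forecast.", "slow"))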
Next up is the ability to connect external speakers to many of our Echo devices – either through the 1/8-inch audio output jack on many Echo devices, or through paired Bluetooth speakers or headphones. With this external audio output option, customers with hearing aids can connect a streamer to the audio output jack, or, if they have a hearing aid accessory or headphones capable of using the Bluetooth A2DP audio protocol, they can get high fidelity audio from their connected Echo device. To initiate the Bluetooth connection, simply say, "Alexa, pair Bluetooth," when your device is in pairing mode, or use the graphical interface in your Alexa app to select the device you want to pair with, and then enable pairing through that interface.
The final feature that may be of specific interest to customers who are hard of hearing is the Equalizer. With the Equalizer, customers can fine-tune the audio to suit their needs and personal preferences. You can adjust the relative volume levels of bass, treble, and mid-tones for your Echo devices through the Alexa app, or on Echo Show devices through device settings. Or, you can just say, “Alexa, turn up the treble.”
TDI’s focus is information and communication technology, so I especially want to talk about three additional Alexa accessibility features, which relate to Alexa Communications. Alexa Communications is a suite of features allowing customers to use Alexa-enabled devices to connect with friends and family.
With Alexa, you can make hands-free voice or video calls to anyone who has a supported Alexa-enabled device or the Alexa app. On Alexa-enabled devices, you can also use services like Zoom, Chime, and Skype.
Furthermore, you can also use Drop In, an optional feature that lets you connect instantly to supported Alexa-enabled devices, similar to an intercom. For example, if your friend, Steve, has given you permission, you can say, "Alexa, drop in on Steve's kitchen Echo Show," and Alexa will immediately open up an audio connection to Steve's specific device so you can check in on him.
Additionally, there are also Announcements, which let you broadcast a voice message on multiple Alexa-enabled devices in the same household. For example, you can ask family members in other parts of your home to come down to dinner by sending an announcement to the Echo devices throughout the house.
On Echo Show, all of these communications options that I just mentioned above can be initiated simply through the graphical user interface (you don’t need to use Tap to Alexa to create the utterances). Just go to the accessibility settings on your Echo Show, and enable Calling and Messaging without Speech.
In addition to Alexa Captions, which I mentioned earlier, we also wanted to give customers the option of having Alexa caption what is being said during calls. Call Captioning enables customers to see captions for Alexa calls in near real-time. Just go to the accessibility settings on your Echo Show, and enable Call Captioning.
The final Alexa accessibility feature I want to highlight today is Real-Time Text. With Real-Time Text, Alexa adds a live, real-time chat feed during Alexa calls and Drop-Ins made from Echo Show devices.
To use it, turn it on in the accessibility settings, and then you can explicitly make a real-time text call with “Alexa, call Steve RTT.” You can also swipe down from the top of the Echo Show screen, tap Settings, scroll down to select accessibility, and then turn the RTT feature on.
When RTT is on, a keyboard appears on the screen, enabling customers to type text which appears in real-time on both parties’ screens. And you can pair a Bluetooth keyboard with your Echo Show, providing a nice way to have real-time text calls instead of typing on the Echo Show touch screen.
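The essential difference between RTT and ordinary messaging is that text is transmitted keystroke by keystroke rather than message by message. Here is a minimal sketch of that contrast, with a simple callback standing in for the network transport; real RTT transports, such as RFC 4103 text over RTP, are more involved.

# A minimal sketch contrasting real-time text with ordinary chat.
# In message-based chat, text is sent only when the sender presses
# Enter; with RTT, each character is transmitted as it is typed, so
# the other party sees the message forming live.
from typing import Callable

def send_as_chat(message: str, transmit: Callable[[str], None]) -> None:
    # Ordinary chat: one transmission per completed message.
    transmit(message)

def send_as_rtt(message: str, transmit: Callable[[str], None]) -> None:
    # RTT: one transmission per keystroke, including corrections.
    for ch in message:
        transmit(ch)

received = []
send_as_rtt("Hi Steve", received.append)
print("".join(received))  # the receiver saw the text build up live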
This brief video shows how this feature works:
<Play captioned video: "RTT on Echo Show" – 0:46 long (there is also an AD variant)>
As seen in the video, thanks to RTT working alongside the video and audio connection, an Echo Show becomes a “total conversation” device. Both ends of the conversation can alternately use sign language, speech, and text, in real-time, switching between, or using all three, as they like.
I’d like to close my remarks with some thoughts about affordability.
It is not enough to have products and services that are accessible and delightful to use if those products and services aren't affordable. At Amazon, we focus substantial time and energy on finding ways to reduce the cost of our devices while at the same time improving how they feel in your hand and their performance year-over-year. It's for that reason that we are able to deliver the 7-inch Fire tablet – with a custom-built screen reader and some of the most sophisticated braille support, which together provide access to over 12 million screen reader-supported Kindle books and hundreds of thousands of movies and TV shows, virtually all of which are captioned, all with hands-free Alexa – for less than $50. It's for that reason that we are able to deliver the Fire TV Stick Lite high definition streaming media player – with Dolby support, tens of thousands of channels, the VoiceView screen reader, our built-in magnifier, and our innovative text banner feature for customers with a narrow field of view – for less than $30. And it is why we are able to offer the Echo Show 5, with all of the Alexa accessibility features, for less than $85. You can get a helpful total conversation device, with the ability to call any phone number in North America for free, as well as other customers' Echo devices for free, that supports call captions, real-time text, and Bluetooth pairing, for less than a nice dinner out for you and a few friends.
And that is what we mean by Customer Obsession for Customers with Disabilities.
Thank you.
Facebook’s Approach to Accessibility
Monica Desai, Facebook
Transcript
Hello, and thank you to TDI's Chief Executive Officer, Mr. Kaika, for the opportunity to speak at this year's conference. And thank you to TDI, which has been a great partner and organization for Facebook to work with.
My name is Monica Desai and I am the head of the global Connectivity and Access Policy team at Facebook.
My team covers issues involving accessibility and other issues such as online messaging, online video, infrastructure, net neutrality, and spectrum. Prior to joining Facebook, I spent over a decade in senior positions at the Federal Communications Commission. It was while I was at the FCC that I became acquainted with TDI and the excellent work that you do. I served as Chief of the Consumer and Governmental Affairs Bureau at the FCC, which develops all policies and rules in connection with accessibility issues, and also served as Chief of the Media Bureau, which has oversight over broadcasters and cable companies and oversees captioning policies.
After becoming Chief of the Consumer Bureau, I gave my very first keynote at the TDI conference. That was in July of 2005, on the 15th anniversary of the Americans with Disabilities Act. In connection with that appearance, I was excited to announce the adoption of rules that TDI had been advocating for, improving Video Relay Service speed of answer and expanding hours of service, along with new rules allowing compensation for Spanish language translation, and the launch of a rulemaking proceeding to consider higher-quality closed captioning. That same day we also announced new captioned telephone service rules that the Hearing Loss Association of America had been advocating for.
Now, 16 years later, I'm thrilled and honored to be invited to address this audience again – this time representing Facebook, a company that I have found to be constantly working to improve the accessibility of our products and to improve diversity and inclusion in our workforce.
Our mission at Facebook is to give people the power to build community and bring the world closer together — and that includes everyone. Making our products accessible to people with disabilities is critical to getting our mission right. We take very seriously our commitment to accessibility, and we've built a variety of tools to enhance access to our service. We are also working to improve equal access and inclusion beyond Facebook.
To put the importance of this work into perspective, I’d like to highlight a few statistics around the number of persons with disabilities globally. According to the World Health Organization, more than 39 million people across the world are blind. And more than 285 million people have visual impairments. The World Federation of the Deaf reports that more than 70 million people globally are deaf. And the International Federation of Hard of Hearing People notes that more than 300 million globally are hard of hearing. Assistive technologies and tools – including screen readers, zooming software, automatic alt text, and video captioning – can help people with hearing loss and with vision loss connect with their friends and family while using Facebook.
I’ll touch first on Facebook’s approach to accessibility. We have a dedicated, centralized Accessibility Team to define, develop and distribute user accessibility requirements, best practices, technologies, and other support to advance accessibility work throughout the company. At the same time, accessibility is a horizontal function within the company. What does this mean? This means that accessibility is embedded into the different departments that touch the product lifecycle – including design, research, and engineering, as well as policy and legal – resulting in an important cross-functional effort to promote accessibility in our products.
I’ll highlight a few of the milestones for Facebook in our work to build a more accessible platform. First, let’s talk about captioning. Facebook several years ago launched a tool to enable users to manually add captions for videos, and to customize the way those captions display to the user. We also enable Facebook Live streamers to manually add captions, either by using a closed caption inserter tool or by working with a vendor to add real-time closed captions into their Facebook Live broadcasts. But not everyone takes the time to add captions and our goal is for every video on Facebook to have them.
So, we committed to making large investments in, and leveraging, artificial intelligence or "AI" technologies to enable automated captioning for video. A challenge associated with captions is that they are time-consuming to produce and can be costly, especially for consumers and small businesses. Automated captioning makes it simple and easy for creators to add captions.
And this feature is easy to use – captions are generated with the press of a button and can then be reviewed and edited by content creators, so they have full control. Automated captioning is available, together with manual captioning tools, for Facebook ads and Pages, Facebook Live, Workplace Live, and Instagram TV. The speed, scale, and quality of this AI-powered technology were only possible thanks to advances Facebook AI has made in automated speech recognition over the past few years.
We’re also improving the accessibility of photos at scale on our platform. Facebook has deployed AI-driven automatic photo-description technology that describes objects in photos to people with vision loss – this is called automatic alt text or “AAT”. Automatic alt text uses object recognition to generate a description of photos to enable people using screen readers to hear a list of items that photos may contain. To provide some context on the challenge that these tools solve. Every day, our users share billions of photos. Through research and engagement with the blind and visually impaired community, we found that screen reader users want to engage with and share photos and also that they desire more information about a photo’s content and context.
The standard practice for describing photos to people who use screen readers is the use of alt text. Alt text is a concise text-based description that's used by screen readers to describe an image. Unfortunately, adding alt text takes time and a little skill, and the vast majority of photos are posted by individuals who don't know about alt text, how to use it, or why it's needed. Facebook created Automatic Alt Text to help address this challenge on the platform. The original version of Automatic Alt Text launched in 2016 – the year I joined Facebook – and could recognize 100 common concepts, like "tree," "mountain," and "outdoors." But we knew there was more that Automatic Alt Text could do, and the next logical step was to expand the number of recognizable objects and concepts and refine how we presented them.
So in January 2021, we announced the next generation of automatic alt text, which is now able to recognize over 1,200 objects and concepts – more than 10 times as many as before – including highly requested concepts like landmarks, more types of sports, breeds of dogs, and various types of food. We accomplished this by training the Automatic Alt Text model on billions of public Instagram images and their hashtags. Alt text descriptions are available in 45 different languages, helping to ensure that the feature is useful to people all around the globe. As of March 2021, more than 80% of the images displayed on Facebook and Instagram contain automatic alt text.
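To illustrate the general pattern behind automatic alt text, here is a minimal sketch: an object recognition model returns concept labels with confidence scores, and the system keeps only the confident labels and composes a hedged sentence for screen readers. The labels, threshold, and wording are assumptions for illustration; Facebook's production system differs.

# A minimal sketch of composing alt text from object-recognition output.
# Detections map concept labels to model confidence scores in [0, 1].
def compose_alt_text(detections: dict, threshold: float = 0.8) -> str:
    concepts = [label for label, score in detections.items() if score >= threshold]
    if not concepts:
        return "Image may contain: no description available."
    # Hedged phrasing ("may contain") reflects that recognition is probabilistic.
    return "Image may contain: " + ", ".join(concepts) + "."

detections = {"two people": 0.97, "dog": 0.91, "mountain": 0.88, "pizza": 0.12}
print(compose_alt_text(detections))
# -> Image may contain: two people, dog, mountain.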
I am also pleased to announce that Facebook is building social audio into the core experience of our family of apps, and has just rolled out Live Audio Rooms in the US. Live Audio Rooms is a new experience that lets people discover, listen in on, and join live conversations with public figures and select Facebook Groups. Although it’s an audio experience, we are including captions to make the experience accessible.
Another area I want to touch on is User Feedback & Research. I want to call attention to the importance of the continuing dialog we have with organizations representing people with disabilities, and the feedback we receive from our users in developing our accessibility tools and promoting the accessibility of our products.
For example, our accessibility research team went on a nine-day deep dive in Brazil to learn more about how being deaf or using sign languages changes a person's interactions with social media. We learned that a large number of Brazilian people are having deep conversations through sign language by sharing video clips. This research directly informed our thinking around hands-free recording in camera experiences, and around the specific needs that sign language communities may have for technology. Research like this helps us to understand more about how technology breaks down barriers to equality, and how Facebook can help. We are continually in conversation with the community, both in soliciting feedback from persons with disabilities about new products and accessibility tools and in maintaining dialogue with disability advocates and rights groups.
Next, I'd like to talk about an emerging area of focus for us, which is XR and accessibility. XR, or Extended Reality, includes AR, VR, and mixed reality. I'm very excited that one of my Facebook colleagues will also be participating in a panel about XR gaming at the TDI conference. We take accessibility seriously with all products, and AR/VR is no exception.
XR is a new and emerging technology, and accessibility in XR is likewise an evolving area. We are working to understand how to best provide great, inclusive experiences. On June 17th, we released Oculus Version 30, which includes new accessibility features to make the future of computing a better experience for everyone. With version 30, we're introducing a brand new Accessibility tab into the Oculus Settings menu. There, you'll find tools and features that enable you to customize your VR experience.
We’ve moved some existing features under this new Accessibility tab, such as the ability to change your headset’s default text size. But we’re adding two new features with version 30 as well. The first is Color Correction, which is a system-level display setting that increases the legibility of colors that are commonly difficult to differentiate. With this technology, you can switch between three options: green-red, red-green, and blue-yellow.
Another much-requested feature enables you to experience VR from a “standing” vantage point even while physically seated. This “Raise View” feature raises your viewing height by 16 inches in supported apps. “Raise View” is still an experimental feature so let us know your feedback if you try it.
As you can tell, many elements of technology needed to build out mixed reality do not exist yet – and neither do many of the norms around their responsible development and use. That’s why we work with organizations to research, document, and distribute public XR Accessibility guidelines for developers in partnership with the XR Association, of which we are a founding member. The guidance offers developers a set of industry-backed best practices for developing accessible XR software that enhances experiences for all users, not just those with disabilities. We are actively contributing and participating in working groups with the community and industry-level organizations such as the World Wide Web Consortium and XR Access that promote responsible development and adoption of virtual, augmented, and mixed reality.
For Oculus, we issued our own set of technical recommendations, called Virtual Reality Checks, that are designed to help developers create more accessible VR apps. Moreover, internally at Facebook, we created an ARVR accessibility task force to harness the interest and talents of individuals across the business who want to improve the accessibility of our products, software, and content.
We already see XR technologies benefiting and being built for persons with disabilities specifically. There’s been a lot of discussion about the potential of VR technology for people with disabilities, including mobility-related disabilities. Facebook’s Oculus headsets and equipment are bringing accessible experiences to life.
One example is the Anne Frank House VR experience on Oculus, which recreates in virtual reality the annex where Anne Frank and her family lived in hiding from 1942 to 1944. This experience uses cutting-edge visualization technology and also extensive historical research to open up the experience to an even wider audience in an immersive way. This is available from your own home, or anywhere, and the Anne Frank House is also offering this as an option to mobility-impaired visitors, letting those who are unable to climb the stairs see the Secret Annex for themselves.
Another great use case of VR is to create awareness and build empathy. One experience, called Notes on Blindness, is a perfect example. It's the story of a man who documented his vision loss while he was going blind. He took his notes and created a VR experience that gives you a glimpse of what it is like to be blind. We're excited in general to explore this new space of immersive computing and building in accessibility!
It’s also important to note that building products that are accessible require a workforce that can represent and advocate for the diversity of our users. At Facebook, disability inclusion is one of our top priorities. Our recruiting and accommodations teams offer support to candidates through the hiring process, and once they are hired, our accommodations program supports employees with disabilities. We also have an active Employee Resource Group for people with disabilities. Indeed, Facebook has been named one of the “Best Places to Work for Disability Inclusion” three years in a row by the American Association of Persons with Disabilities and Disability: IN’s Disability Equality Index.
In closing, I again want to thank TDI for the great opportunity to speak here today and, more importantly, for all of the fantastic work that you do and for our collaboration as partners in this area. Thank you very much, and congratulations on your conference. I hope the rest of it is terrific. Thank you.
Conceptualize Inclusive Design
Gary Behm, Center on Access Technology
Transcript
Hello, my name is Gary Behm. I use the pronouns he, his, and him and I identify as a deaf, white male. I have graying, light brown hair and I am wearing glasses and a black polo shirt with the RIT/NTID logo embroidered in orange and white.
I worked as an engineer with IBM for 30 years. Currently, I am a Professor and Associate Vice President for NTID Academic Affairs. Also, I am the Director for the Center on Access Technology, or CAT, at RIT/NTID.
Allow me to expand on the importance of the Center on Access Technology. The Center on Access Technology was established at NTID over 15 years ago with the purpose of addressing the challenges we as Deaf people face regarding accessibility to communication, information in the classroom, and various technologies. We wanted more research and understanding to improve accessibility for our Deaf and hard of hearing students. For example, we often have Deaf and hard of hearing students on the RIT campus who participate in hearing classes. Some students prefer to watch the lectures with real-time captioning. We ensure the captions are accurate and allow the student full access to the information so that individual can learn more effectively. Also, we want to be sure Deaf and hard of hearing students can interact with their hearing professors, peers, and friends. We want them to feel comfortable, and that is why we make sure they have full access to communication. That research is in a constant state of improvement. We haven't achieved the perfect solution as of yet, but research is ongoing.
As you are aware, technology is rapidly changing. We need to make sure that the needs of Deaf and hard of hearing people are taken into consideration for future technology designs, and that those designs meet those needs. We don't want new technologies released only to find that they are not deaf-friendly, so we want to be part of the design process. For example, during the COVID pandemic, many people were instructed to stay home and work or go to school remotely. While video conferencing was a new, disorienting experience with a steep learning curve for most hearing people, it is a technology that Deaf and hard of hearing people had been using for many years. Now hearing people were forced to use video conferencing for work and school. Rarely did the hearing community reach out to ask the Deaf community about their experience using video conferencing technology to improve designs for the future. Hearing people rushed to develop new video conferencing platforms, but from the Deaf perspective, the new platforms were not Deaf-friendly and lacked accessibility – lacking things such as various camera perspectives and accurate captioning. Developers could have asked the Deaf community how to improve video conferencing technology based on their experience. Alas, they did not. In the rush to release a product when COVID-19 hit, little regard was given to Deaf experiences and needs. That's a current example of new technology.
As we enter “the new normal” of post-COVID times, it will be interesting to see what happens with technology. Will the new normal include people who traditionally worked on campus but now continue to work remotely? How will the new normal affect the Deaf and hard of hearing community? Is remote work a good thing? We are in interesting times.
Image descriptions:
- Illustration of North America in light blue, with a white text over it that reads, “11 M”
- Illustration of a mobile phone in blue with its screen in a darker blue, with a white text over it that reads “+8M”
- A capitalized text is written in white saying “ZERO” with illustrations in opposing corners of the “interpreter” sign and a laptop with “CC” on it, both in blue.
Another example is the mobile phone. The technology in the modern phone is amazing. I remember my first phone. Now the phones are bigger, sharper, and more advanced! There are approximately 11 million Deaf and hard of hearing people in the US. Of those, more than 8 million own a mobile phone. Yet while there are many Deaf and hard of hearing mobile phone users, none of them can use a native phone accessibly out of the box. Native phones come with the ability to make calls, but they require the Deaf user to install other apps to make calls, and that is not very accessible. When a hearing person buys a phone, it is ready to go out of the box. When a Deaf person buys a phone, first they must download an app and add their accessible phone number, and now they have two different phone numbers, which can be confusing: one number is for texting, another is for calling through a video relay service or a captioning service. It's not uncommon for a Deaf person to have 2, 3, or 4 different phone numbers. In my opinion, that's not very accessible.
I would like to discuss why it is important for companies to conceptualize inclusive design. For example, consider a curb ramp, also called a curb cut. Curbs run along the edge of streets to keep traffic on the street. The curb is cut, or ramped, at intersections to allow people in wheelchairs to go from street level to sidewalks without abrupt drop-offs. Believe it or not, the curb ramp was developed in 1945 for injured veterans returning from war; it wasn't designed specifically for wheelchairs. It was years later that groups of wheelchair users fought for more curb cuts to improve wheelchair access, and now they're everywhere. Any new road will automatically have a curb ramp. The design allows improved access for all. Even people who don't use wheelchairs benefit from the design. It is truly a nice design, and similar to the concept of a mobile phone designed for Deaf and hard of hearing people that would benefit all mobile phone users.
Another example is doorbells. My first house had a doorbell connected to a chime. That didn't help me. When someone came to the door and pushed the button, the bell rang, but I couldn't hear it. Fortunately, I knew a few deaf engineers who helped me hardwire a system in my house, so when a person pressed the doorbell, lights would flash. The same system worked with the phone: when I got a call, the lights would flash. At that time I couldn't just go to the store and buy a ready-made doorbell/light system to install, because companies don't think about the Deaf and hard of hearing population; it's such a small number. The majority of people buying doorbells can hear a chime, and I understand that, but deaf people need to know there's someone at the door too. Now, years later, it's really nice: you can buy a system like Hue, for example, so when someone presses the bell, the Hue lights flash. I no longer need to design my own homemade system. I can go to the store and buy a system off the shelf, like Hue or a smart doorbell, bring it home, set it up, and it works. It's a major stress reliever, and it makes things more accessible for the future.
Now, why can’t the same design concept apply to a mobile phone making it more accessible for the Deaf and hard of hearing? The phone itself is very powerful with tremendous technology. Phones come with everything, why not add just a little more functionality to make it more accessible for the Deaf and hard of hearing?
For that reason, I have a project called "IRIS." The focus is a simple concept. The phone has a dialer where a person types in a number to make a call. Sometimes the number is already on the phone, so you just select the contact and the phone automatically dials. Deaf and hard of hearing people don't use that dialer. We use a separate app with a separate dialer, and a unique dialer for each app. Why not incorporate the dialers we use into the phone's dialer? It's the same concept as the caption decoder on old TVs. I remember back in the 1970s we finally got captions on the TV after I bought a decoder box from Sears Department Store for $280. We brought this box home, plugged it into the TV, and finally I could see captions. I was thrilled! Yes, they were just captions, but it was better than nothing. There was a big box on the TV, but I didn't care; I could finally follow what was happening on the TV. Now, many years later, we don't see boxes on TVs, and that's because the decoder is now integrated into the TV itself. So, thank you to our Deaf community, who advocated for that technology to be built in. Now, we don't worry about captions on the TV. When you buy a new TV, that technology is already there. You simply push "CC" on the remote and the captions pop up. Whereas before I had the added work of buying the separate box and adding it to the TV, similarly with the mobile phone I have to get an app and add it to my new phone to make it work for me. The concept is similar to the old decoders. The app has separate phone numbers, directories, and video mail. Why not just build it in? It's so simple. So, we are researching that concept. Believe it or not, phone technology has enough power to do whatever we want it to. Now we need to make phone manufacturers agree that phones need a redesign to meet our needs, and that's why we're discussing this.
Now to expand on Project IRIS. We need a lot of experimentation and testing to make sure our project is feasible. The goal is a mobile phone with one number that can be used to call VRS, CTS, or 911. We want to be sure the technology is feasible. So, a group of Deaf people is working together in a lab environment with outside companies to see if it's possible for one phone number to place a call to VRS or a captioning service. We've been experimenting with that and, sure enough, it works beautifully. It really depends on how you set it up. For example, we want a single phone number for everything instead of 2, 3, or 4 numbers: one for VRS, another for captioning, and so on. When I give my number to a friend, it can be confusing. Do I give them my VRS number or my native phone number? And with texting, I don't want to confuse that with my number for the videophone. We want to see if it's possible to have only one number. We've been working for almost one year now on various parts of the project. One part is the phone carrier, or network provider. Those are big companies. The second part is the phone itself and the phone manufacturer. For example, Samsung, LG, and Google make the phones and sell them to the phone carrier, who rebrands the phone. We also work with several different captioning providers and VRS companies, as well as different 911 systems. There are many parts involved. It's simple when working in a lab where we control all the variables, but in real life, working with a variety of phone carriers is a major challenge. The point is, we're not limited by the technology; all of this is technically possible. The challenge is getting all the parts to work together: the phone carrier, CTS, and VRS. Often one part will depend on another part's solution, but that solution depends on a different part's solution, and the relationships are deeply enmeshed and dependent on each other. That is why, historically, there has not been a successful one-phone-number solution. We are working hard to include the community to help us make this happen. Captions on the TV weren't too difficult a problem to solve: there was the TV manufacturer and the decoder manufacturer, and it was pretty simple getting those to work together. The mobile phone is a much more complex ecosystem with multiple parts involved. That's why we want to discuss more how we push this project forward.
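To make the one-number concept concrete, here is a minimal, hypothetical sketch of the routing decision at the heart of the idea: a single outgoing number, with the phone choosing the right transport based on how each party communicates. Every name in this sketch is invented for illustration; the real project depends on carriers, manufacturers, and relay providers cooperating.

# A hypothetical sketch of one-number call routing. Names are invented.
from enum import Enum, auto

class CallMode(Enum):
    DIRECT_VIDEO = auto()   # both parties sign: connect point to point
    VRS = auto()            # route through a video relay interpreter
    CAPTIONED = auto()      # route through a captioned telephone service
    VOICE = auto()          # ordinary voice call

def choose_route(caller_signs: bool, caller_uses_captions: bool,
                 callee_signs: bool) -> CallMode:
    if caller_signs and callee_signs:
        return CallMode.DIRECT_VIDEO   # no intermediary needed
    if caller_signs:
        return CallMode.VRS
    if caller_uses_captions:
        return CallMode.CAPTIONED
    return CallMode.VOICE

# One number, different transports depending on who is calling whom.
print(choose_route(caller_signs=True, caller_uses_captions=False,
                   callee_signs=False))  # -> CallMode.VRS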
I want to discuss the mobile phone, but also other future technologies. Phone technology will continue to improve. New phone models are released every year or two, and we need those phones to meet our accessibility needs. But we need to be mindful of other technologies – the Internet of Things, for example. The Internet of Things, or IoT, is a strange name. Basically, IoT means the internet is on everything. Technology has, of late, truly taken off. The tiniest chip has the power of a full computer, complete with its own IP address, memory, and I/O integration, and these tiny chips are everywhere. IoT technology has exploded and can be found in the world of manufacturing, in homes, and in healthcare, as examples. We need to be sure we are not overlooked as that technology progresses and that Deaf and hard of hearing people are considered in new designs and technological solutions. We want to ensure the design is inclusive and that our observations are included in IoT designs. The Internet of Things and those chips with IP addresses enable your computer to connect to the device via WiFi. A simple example of IoT at home is any "smart" appliance such as a dishwasher, washing machine, dryer, or stove. Now, I can buy and install a new stove, set the timer to cook for 45 minutes, and when it is done the stove will send a message to my phone. So, I can set the stove to cook, go do other things, and when I get the alert on my phone, I know my food is ready. In the past, deaf people had to watch the time. If we didn't pay attention, the time would go over, the food would overcook, and dinner was burnt. That concept of the appliance communicating with the phone is the IoT. A washing machine manufacturer installs a chip with an IP address that connects to my phone via an app. Wow! Now that's the future! Smart homes are the future. It is true that IoT easily connects my phone to the washing machine, and that benefits deaf people. In the past, there was nothing like this. I would start the washer in the morning and get busy during the day, leaving the wet clothes in the machine all day because I forgot to move them to the dryer. That happens sometimes; people get busy with other things. But the IoT phone app alerts me when the wash is done so I can move my clothes to the dryer. That is a benefit to the deaf community. It shows how the IoT helps deaf people, but there is still more ideation and development to happen, and I want to be on top of that to be sure Deaf and hard of hearing people are included. That is important.
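The appliance-to-phone pattern described above can be sketched with MQTT, a common IoT messaging protocol. This minimal example uses the paho-mqtt library (1.x callback API); the broker address and topic are assumptions for illustration.

# A minimal sketch of an appliance publishing a "cycle complete" event
# that any subscriber (a phone app, a smart-light bridge) can turn into
# a visual alert.
import time
import paho.mqtt.client as mqtt

BROKER = "broker.example.com"   # hypothetical home MQTT broker
TOPIC = "home/laundry/washer/status"

def on_message(client, userdata, msg):
    # A phone app would raise a notification here; a bridge could
    # flash the room lights for a visual alert instead.
    print(f"Alert: {msg.topic} -> {msg.payload.decode()}")

# Subscriber side (for example, the phone app).
listener = mqtt.Client()
listener.on_message = on_message
listener.connect(BROKER, 1883)
listener.subscribe(TOPIC)
listener.loop_start()

# Appliance side: publish an event when the wash cycle finishes.
washer = mqtt.Client()
washer.connect(BROKER, 1883)
washer.publish(TOPIC, "cycle complete")

time.sleep(1)  # give the subscriber a moment to receive the alert
listener.loop_stop()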
Now, related to IoT, a big topic is AI, or machine learning. It's another explosive field. The point of machine learning is to develop a smart system that can make appropriate decisions based on collected data. Many companies love to collect your data. They want to know more about you so they can sell more products to you or make a smarter system. One example is ASR, or Automatic Speech Recognition, a technology that has recently taken off. Really, ASR is an old technology, developed back in the late 1950s, that improved incrementally over the decades; only recently has it taken off all of a sudden, and many companies now offer ASR. You may wonder how ASR benefits deaf people, and that's the challenging part. If you remember, when developing new ASR or new AI, they need data, so companies collect lots of data, mostly from hearing people. Based on that data, they develop new smart systems, but do they include data from Deaf and hard of hearing people? When data is collected from deaf people, it is often deleted because it's not "normal," but we want to encourage companies to keep that data to help improve the lives of the Deaf and hard of hearing. That practice is called "data bias," because it excludes the small amount of data from Deaf and hard of hearing people as compared with the millions upon millions of other data points. We want to be sure that we are "counted" in their data collection. AI applied to the Deaf population has many benefits. For example, with ASR, a hearing person can speak and the speech is translated into readable captions. But the big question is, "Is the quality of the translation reliable?" ASR is a great idea, but does it meet the needs of the deaf community? Some deaf people complain there are so many errors in ASR output that they miss information and the conversation goes over their heads. That's not good either, so we need ASR to meet their expectations. Of course, we will continue to need human captioners, but ASR is blowing up. Another very cool future technology is sign recognition, or a system that recognizes your signing with AI. They need data, so now they are collecting data around signing to help build a smart system. That is a big growth area as well. So, again, the future is exciting, but, at the same time, we want to make sure new innovations include us in their design process – in other words, inclusive design.
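One concrete way to surface the data bias described above is to measure ASR accuracy separately for each speaker group rather than as one pooled number. Here is a minimal sketch using the jiwer library to compute word error rate (WER) per group; the transcripts are invented for illustration.

# A minimal sketch of checking an ASR system for group-level bias by
# computing word error rate separately per speaker group.
from jiwer import wer

# (reference transcript, ASR hypothesis) pairs, grouped by speaker group.
samples = {
    "hearing speakers": [
        ("set a timer for ten minutes", "set a timer for ten minutes"),
    ],
    "deaf and hard of hearing speakers": [
        ("set a timer for ten minutes", "set a time for two minutes"),
    ],
}

for group, pairs in samples.items():
    refs = [r for r, _ in pairs]
    hyps = [h for _, h in pairs]
    print(f"{group}: WER = {wer(refs, hyps):.2f}")
# A large gap between groups signals that the training data
# under-represents some voices.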
How do we, as a community, become more involved in the change of technology? That's what I want to discuss. Think about the recent emergence of COVID. We survived thanks to technologies like video conferencing, and we saw the progression of differing technologies. So, how can we get involved? We have ways. We have great conferences like the TDI conference to help us share ideas with TDI, but also with other organizations. Obviously, we have NAD and HLAA, and there are more out there, and we need the communities to come together. We are responsible for gathering together and discussing what is happening with technology. We have a habit of just accepting what we're given – for example, the mobile phone. We accept it and add an app to it. After many years we finally have a solution for us, and we accept it. But we need more. We need improvements, and if we get involved we can encourage those changes. I believe that. If we analyze the mobile phone, we can make changes. If we do nothing, the phones will stay the same and we will continue to add apps, have three different numbers, and accept what we're given. That's okay. It will work, but it could be more. I think it's important, through organizations like TDI, that our concerns are recognized and we ask, "Hey, what about this?" That enables TDI to represent us to large corporations and advocate for solutions that companies typically overlook. I understand a company's priority is to make money, so they tend not to focus on disabilities, though they are improving. But we can be more actively involved in communicating our needs so they are recognized by large companies. Perhaps solutions to our needs will be included in the next product cycle. I realize that nothing happens overnight. Changes to smartphones will take a couple of years to implement; I recognize that it requires time, and that's why it's important to come together. NAD is a great organization, and it is very supportive of our community, and we need to let them know when something is not accessible, when we're not satisfied with the captions, when education is not sufficient – whatever the concern. It is our responsibility to get together and have good discussions – not to criticize products, but there is always room for improvement. We need to continue to work together. Hopefully, during this conference, we can come together and discuss new technologies. I myself am an engineer, but I learn new things every day. I have to keep up with new technology. Some of us may have given up on trying to keep up, and I understand that, but it's important to remember it is our responsibility to keep up with technology and ask if it's meeting our needs. If yes, great. If not, what are we going to do about it? Realize that there is power in the collective group compared to an individual. If I go to a big company as an individual, it's easy for the company to ignore me. But if TDI or NAD collectively express our needs to a corporation, they will cooperate with us. TDI is an important conference for us to move forward with new technology for the future.
Thank you.
Creating Communication Equity
Chris Soukup, CSD
Transcript
Good afternoon!
It is a joy to return to support the TDI Conference. I want to open by recognizing the TDI Board and CEO Eric Kaika for their important advocacy work over the past two years.
Thank you.
This month we celebrate the 31st anniversary of the Americans with Disabilities Act, a law that has provided protection from discrimination in employment, education, and public space. With it, phrases like “reasonable accommodations” and “functional equivalency” became widely known and familiar accessibility terms.
In the 1990s, this created pathways for the first form of relay services via TTY. In that era, deaf people using TTYs to make calls through relay was considered "functional equivalency," even though I am sure most would agree the communications experience was significantly different.
Thus, functional equivalency has in practice been driven by the best available technology… the best available solution that can be offered to the deaf community to provide a somewhat comparable and somewhat accessible form of communications.
In the 1990s, best available meant TTY, but there was a lot of technology that simply didn't exist in 1990 – most significantly, high-speed internet, video, and mobile technology.
Today we have smartphones with video calling capabilities that fit comfortably in our pockets.
Technology continues to evolve with emerging solutions that incorporate voice and sign recognition. But laws and common practices have not kept pace with the rapidly changing world.
As advocates we now have three distinct areas of attention:
1) safeguarding the integrity of existing solutions
2) advocating for continuous innovation to push further and
3) ensuring that emerging technology is truly better and that it does not break systems that we’ve worked for decades to build.
Let's talk a little more about the challenges with the term functional equivalency. The deaf community normally signs FE as "functionally equal," but that's not really accurate. A more accurate way to think about functional equivalency is "functionally equal substitute or replacement." When you think about it from that perspective… is that really the best we can do?
Do we consider FER – a functionally equivalent replacement – a beautiful end state, or do we see it as a minimum? We finished it; now check the box.
If we see it as a basic minimum to comply with the law, then how do we describe that bigger more perfect communication experience?
Moving beyond FER demands a mindset shift, and a new mindset needs a new language. Today we propose that the community consider adopting "communication equity" as a stronger goal.
Communication Equity does not accept substitutes. CE seeks a complete removal of barriers. CE evokes the true spirit of the ADA… a communications experience for deaf people that is the same as the experience for non-deaf people. Communication Equity recognizes that there are many different ways to be deaf and that there is a broad continuum of identities that make up our deaf community.
Increasingly, solutions must be more customizable to fit the specific ways that a deaf person prefers to interact with the world. Communication Equity encourages movement toward solutions that reduce and eliminate the distance between a deaf caller and the party they are calling. The use of intermediaries should be eliminated entirely whenever possible. Give deaf people the opportunity to communicate with the party they are calling directly in their native language whenever possible; it is achievable 80-90% of the time.
Make services more directly available in ASL for sign language users. Two clear and emerging applications are customer service and telehealth.
FER vs. CE, a comparative example: some companies have worked to engineer solutions that connect deaf customers to their business through VRS or VRI. This provides basic access and exceeds the minimum standard required under the ADA, so we could say this solution is FER. But is it delivering an experience that is truly equal? Is this the goal?
Increasingly, we are seeing wonderful commitment from corporations to provide direct communication: opportunities for the deaf to connect with company representatives directly in ASL. This commitment not only provides a truly equal communications experience (CE) but also has the benefit of creating good-paying jobs for deaf people who, as a community, have dealt with chronic under- and unemployment. The solution is elegant in its simplicity, and we now have all the technology we need to deliver this same experience across many different industries: government, travel, banking and credit cards, retail, technology service providers, and more.
CSD estimates that 80-90% of all customer service interactions currently supported by intermediaries can significantly improve by moving to direct communication.
CE does not disregard older solutions, but becomes a mechanism for continuously pushing to reduce, and ultimately eliminate, the gap that separates deaf people from the rest of the world. CE should be the space in which new solutions are designed. If we make that mindset the goal, the solutions will be truly transformative rather than slight improvements over the last – or even steps backward. We believe that this is the ultimate realization of a quality communication experience.
Technology is power, and with power comes great responsibility. There will always be market pressure to move toward automation and the lowest-cost solutions that meet FER. The risk to the deaf community is a shift to those solutions before the technology matches the quality and integrity of today's services.
Remember the three distinct responsibilities:
1) safeguard the past
2) challenge the idea of what is possible
3) ensure that innovation is BETTER, not just CHEAPER.
I think every day about a world where deaf people can use innovative technology and innovative, simple approaches to service to move about their daily lives as equals. That will be when the world fully shifts its perspective of deaf people, realizing and recognizing our voluminous talents and abilities as equals.
Thank you for giving me your time and attention as I offered this truly beautiful vision for our community.
Inequities in Technology
Leah Cox, UNC
Transcript
>> LEAH COX: Okay. Good afternoon. Erin, are you with me?
Okay. I am here to answer any possible questions that you have, or to have that dialogue and discussion around race, racism, audism, intersectionality, and technology. Thank you, Beth. And thank you, Debbie. I hope that, um, some of the thoughts that were presented to you give you a sense of how you can engage and change. All right? How can you engage in provoking conversation where you work, with your friends, with your family, with your organizations around some of these issues, and how can we become more anti-racist in the things that we do. Okay.
So let me get to the Q&A. So how does — how do you see the deaf community, ASL, improving our knowledge and awareness of racism and audism within our community? So I think people have to be engaged in doing the work, right? Educating yourself. And especially in the deaf community, I think often we focus on one identity versus all the identities that all of us have. Right? It's not just about being, ah, deaf, but also looking at being male, female, gender nonbinary, all of those things. Are you a part of the LGBTQ community? Are you part of an aging community? All of those things matter. Right? So taking the time to educate ourselves will make a difference. What is the name of the book I mentioned? Let me go back. It is called Black Software: The Internet and Racial Justice, from the AfroNet to Black Lives Matter. And the book is by Charlton McIlwain – that's M-c-I-l-w-a-i-n. Someone asked, is there a resource such as a university department or a think tank that specializes in gathering data on disparities in the world of information and communication technology that can inform TDI's work? At this time, I am not certain, but there are lots of universities, especially those universities that have departments focused on data science and data analytics, that may be doing some of that work. Gallaudet may be one of them, or RIT. Okay. Next question.
Why do you think many hearing people don't recognize the word audism? I think it's just not a word that folks are using. It's not one in hearing circles that folks have thought about. Folks are more engaged in these conversations of race and are missing all the other identities out there, and I think it's really important that we start recognizing, um, the use of audism and how it affects all of our community. (inaudible) Reading is not a very fun activity for everyone. Sometimes we need an organization like Deafhood, which focuses on identity, deaf pride, et cetera, to expand on the repertoire of teaching and group facilitation practices. It has to come from within the deaf community itself. Yes. And I get it. Some folks don't enjoy reading. Right? But there are organizations out there that can help create more awareness, and if you're not a fan of reading, engage yourself in those community organizations that are addressing some of these issues. Thank you, Suzanne. That's a great suggestion.
Next one. Love your examples of black students being dissuaded from taking STEM classes and of technology targeting members of the black community. How may these policies be modified for the betterment of everyone? How may they be? How may the policies be modified for the betterment of everyone? I think, ah, you know, wherever we are, in the communities that we are in – whether it's our congress folks or the local legislature – we need to be a part of that. Right? If we're not, these things don't happen. We are forgotten in our communities. So getting folks engaged in those conversations: who is talking about these things, what kind of legislation is taking place, even in places where you work, right? What kind of technology are they using? Is it being helpful? Is it hurting us? Is it preventing us from doing the things we need? Is there bias? Right? So we know that with some of the facial recognition software out there, black folks are not even seen sometimes, right? The things that happen with those, and/or profiling, right? So being a part of the conversation, getting engaged in your local government and community policies, commissions – all of those things make a difference.
So the question is: Here in New Jersey, are hearing loss associations, state associations, and chapters diverse? We made sure to make it so – always inclusive. We're a great bunch, all welcome. Come to our 14th annual walk for hearing in October. I live a little bit far from New Jersey, but if I can make it, I absolutely would. I think it's great folks are engaging in their state and local chapters and that they're diverse, because a lot of times we can't find those places to engage in the work and to (inaudible). So thank you for sharing that statement. Are there any checklists for assessing telecommunication service providers for the audism and racism that impact deaf people in telecommunication and communication technology, for the purpose of audism and racism prevention awareness? Okay. So the checklist would probably start with who is sharing. Right? Where are we sharing this information? And with the definitions, all of those engaged have to be on the same page. Are we making sure that everybody in the group understands the definitions around race and racism, right? Power and privilege. Audism, you know, deaf and hard of hearing, what is deaf culture, what does that look like, and what does that mean for everybody? And then talking about technologies, you know, what are all of the buzzwords that folks need to understand, and then linking those to how they affect all of us in terms of our identities – and especially in terms of race, hearing, all of those things. Right? Making sure that we start that list with just the basic list of definitions first. Because that will engage you in the discussion of understanding, and we each have to be on the same page before we can move forward. Right? And then a list of what the technologies are. What are the communication technologies that you use in your work and in your everyday life, and where have you seen some problems? Right? The ones that are affecting you as an individual or as a group. Start that list. You may come up with a lot. You may come up with just a few, but you have to start somewhere. Okay? Okay.
So, ah, Deafhood — I don't know who that is, but they may be watching this segment of TDI — says there are folks interested in being an ally with black, brown, and API people, but they don't know how. And that may not be a question I can answer for you other than: contact folks, reach out, get to know the deaf and hard of hearing members in your community, get to know the black and brown deaf individuals in your community. There are probably organizations, and I'm guessing that if you start doing some searching and googling, um, you will find those organizations, and then reach out to make some connections. And I think it's important that we do that. Everyone needs an ally, right? Not just black and brown folks – Indigenous folks, Asian American folks, aging deaf folks, all of us need an ally. LGBTQ, everybody needs an ally. So it's about us supporting each other. Okay? All right.
There seems to be a trend of one-upmanship decreasing emphasis in the discourse when talking about a particular -ism – like "don't play identity politics," "don't practice discrimination Olympics." Is that trend a good thing or a bad thing? Look. There shouldn't be a hierarchy of oppressions. All right? That's what we don't want to do. Right? Never. Right? Each of us faces oppression in multiple ways all the time. But we can't move forward if we're attempting to one-up each other. Right? So my thought is it's not a good thing. It is a trend. We know that there's also a cancel culture out there, right? We see it on social media where everyone's canceling out someone because of their identity. It's not what we want to do, and for those of us who are in communities that are often targeted or oppressed or pushed out, we need each other as allies. So the one-upping doesn't work for anybody. So my suggestion is don't engage, and when you hear it, when you see it, call it out and then help educate. Right? Don't let folks just get away with it, you know, oppressing each other. Give them a road map as to how we can make things different. Okay? Thanks for the high five. Back at you. Okay. The next comment: the first time I called the FCC for an ASL complaint, wow, the web pages had a lot of information; it is hard to write a complaint about each company; also, the TDI biennial conference has a lot of supporting companies – I wonder how they sponsor without any costs? That's a question I can't answer for you. That would be a question for TDI. Okay? Many of the members of TDI are hard of hearing people who may have grown up hard of hearing or may have lost their hearing later in life. We can experience tremendous difficulties communicating with other people, and sometimes we can experience tremendous discrimination. Most hard of hearing people are not fluent in ASL at all and can be completely unable to understand what is being communicated at large outdoor events like the BLM protests. Access needs to be provided for all – and I would agree. Right? And we recognize that figuring out how to communicate with everyone can be difficult, but it is something we need to engage in as much as we can. Right? Everyone is not going to be an ASL signer. They have not grown up in deaf culture, deaf community. For someone who is hearing, I have been blessed enough to learn sign language and ASL. I still don't feel I can always keep up as well as I'd like to, and so I look for multiple ways of communicating depending on the setting. So I do understand. So how many black people are at this fantastic conference? I wish I could answer, but I don't know. Those are numbers that only TDI would have.
The other one is: would you share the road map with us, as you said recently? Absolutely. Can we just think about, again, starting with definitions? Hoping everyone gets on the same page with what is race, what is racism, what are the issues around anti-racism. Can we talk about discrimination, deaf, audism, hard of hearing, deaf culture? All of those things are important as we have this conversation about becoming more anti-racist, becoming more anti-audist. We need to make sure we're talking about the same thing. So sharing and educating each other, and then looking at communication, right? What's out there, what do you use, how is it identifying you, is it useful? Making that list of how it's used, what it seems to be used for, and what is missing, right? So kind of a pros and cons list to help you begin that discussion about what's needed and what's missing. Right. Okay?
All right. I am just checking to make sure there are no more questions. And just so you know, I’m very excited to have been invited to be a part of this conference. So thank you, TDI, for inviting me and for even considering the topic of this small workshop here, this session. It’s important. I think that oftentimes folks forget the importance of talking about these issues around race and audism and racism and anti-racism, but it does affect all of us in our communities. I’m just happy to know that these are issues we’re willing to address from this platform. So thank you. And yes. My ASL interpreter is the bomb. Thank you. Thank you. Thank you, Erin. Thank you. And yes. I wish I could focus on Erin too. She’s moving. I love it. Thank you, Erin. Yes. I will make sure that I pass on the names of the books so that TDI can post them. Yes. So I have a consulting company and it’s called Acquitably Yours. So you can Google it and come up with it, and I do some consulting work on the side once in a while when I have a spare moment. But I love taking the time to come in and meet with folks to talk with them about these issues around race, racism, anti-racism, implicit bias, discrimination, you know, working with the community, all those things.
Okay. We have about 5 minutes or so left. I am happy to take any other additional questions. Nancy, I’m going to ask for someone from TDI to answer your question because I don’t know how many black individuals are a part of the conference and/or this small segment. All right? All right. I don’t know that I can answer the question about live transcription for iPhone users. Again, that would be a TDI response.
Yes. There are probably ways that I haven’t even thought about in terms of outreach, or working creatively within the community, and probably the folks that you know may be better at engaging those in your community. But we do need to include everyone. Right? It’s about bringing folks in, not pushing folks out.
Okay. Again, I want to say thank you. It looks like we’re out of all of the questions. But thank you again for the invitation to talk with you and I wish you the best as you enjoy the rest of your conference. And have a great day. Bye-bye.
Gaming for Access
Brandon Chan, Wendy Dannels, Mari Kyle, Chris Robinson, and Mei Kennedy
Transcript
>> MEI KENNEDY: Hello, everybody, thank you for joining us for the last day of the TDI conference.
This whole week we’ve had different events and today I’m looking forward to getting into gaming and we’re going to have three panelists join us today.
And before we turn it over and we do our introductions I’m going to do a visual description of myself.
I’m a female.
I’m biracial.
Brown skin, brown hair that’s long.
I’m wearing a blue shirt with a gray jacket.
I’m really excited to have these three presenters join us today.
So we have Wendy Dannels, Mari Kyle, and Chris Robinson of DeafGamersTV, and all three of you, if you don’t mind, just go ahead and turn on your videos and do your introductions.
>> MARI KYLE: Sure, I can hop in. My name is Mari Kyle and I’m a game producer at Oculus Studios at Facebook, and I’m really excited to be here.
A lot of my work deals with accessibility in games, and before becoming a game producer, I was on the store operations team where I reviewed games to make sure that they were fit for publishing on the Quest and Go.
I can do a visual description as well.
I am a female.
I have brown hair and am wearing a denim button-up shirt.
And, yeah, I’m really excited to be here.
>> MEI KENNEDY: Hi, thank you, Mari.
Wendy, I’m going to turn it over to you.
>> WENDY DANNELS: Yes, hello, everybody.
My name is Wendy Dannels and I’m research faculty at RIT, at NTID, one of its colleges, at the center of culture and language.
I am a female, I have white skin, I have dark brown hair.
I’m wearing a short-sleeve gray polo with the orange lettering RIT on the shirt.
Thank you for inviting me here today.
>> MEI KENNEDY: Thank you, I’m going to turn it over to Chris.
>> CHRIS ROBINSON: Hi, this is Chris, I’m from DeafGamersTV.
My brand is about deaf accessibility in games.
So we’re talking about captioning in games or videos.
Also, I used to be — I graduated from RIT, I believe it was in ’06, that was back in the day so it’s nice to see you all here.
>> MEI KENNEDY: Thank you for joining us today.
So I would like to start by asking Chris: you and your friend have created a video, and it’s a wonderful video, so before we show it I wanted to ask you, just real quickly, to talk about your experience as a deaf gamer, what you’ve confronted, and what challenges you’ve had connecting with others. Just share a brief experience before we share your video.
>> CHRIS ROBINSON: Absolutely, yeah. I grew up with games since I was 3; I started playing Mario, Mario 3. I saw my brothers playing it and I really wanted to know what was going on. At the time I was the only deaf person in my family, no one signed, and there was not a lot of communication going on in the family in those earlier days, so I was like, oh, that’s cool, and I started to really like gaming.
Gaming doesn’t really require a lot of social interaction, so as far as communicating within a game, you just play it, and I enjoyed myself. Ever since, gaming has changed, especially with technology; you can play with other people online, and the biggest barrier we see for deaf gamers is communication.
Of course, accessibility is what you can see on the screen.
Like some games have audio cues but being deaf, how do I know what’s going on with the audio cues?
I’m just going through the game walking around and not picking up on the cues.
I started this thing called DeafGamersTV, and I wanted to show what it’s like as a deaf gamer and some of the struggles we deal with in gaming while other people enjoy it; it feels like it’s not fair. So I wanted to show everybody how it feels emotionally, so that game developers can take a look at this, start realizing things they may not have understood, and see the deaf perspective when it comes to gaming.
>> MEI KENNEDY: I had one more question I’d like to ask before we show that video.
I’m curious, would you mind sharing your experience gaming with hearing gamers? How do you communicate, and what are some tips for deaf gamers communicating with hearing gamers?
>> CHRIS ROBINSON: Hanging out with hearing people, my friends and a variety of other people, most of the time there may be barriers, right, because some hearing people may not want to cooperate with you; they feel it’s so much work and they have to type.
What I’ll do is I’ll just use Skype for chat, and now they have this thing called Discord, which is like a better version of Skype, so you can get on Discord, do video chats, and chat through text using DMs.
There are some other things too.
Oh, and now there’s speech-to-text and text-to-speech type of thing so if you’re playing — if I’m playing with some hearing friends, if they speak it will actually do speech-to-text so I can read it in English or whatever you’re using in that setting.
Most of my friends will do speech-to-text, because even if I use my hearing aid it’s not really clear, and I keep interrupting people, saying “what?” whenever they say something; I know that can interfere with enjoying the game. So now when I type with my friends, I’ll look over and sometimes they’ll say something that I wanted to say; they’ll say it to the other teammates for me.
It doesn’t always happen that way, so there are still common barriers in place, but, yeah, those are some tips: if you’re playing with hearing gamers and they’re willing to type, that’s the best way to communicate, unless they know sign.
If they can sign and we can understand each other that’s so much easier.
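For readers who want to experiment with the speech-to-text setup Chris describes, a minimal sketch in Python using the SpeechRecognition package might look like the following; the console print stands in for whatever overlay a game or chat tool would draw, and none of this is tied to Discord’s actual implementation (it also requires the PyAudio package for microphone access):

import speech_recognition as sr

def caption_teammates():
    # Listen on the default microphone and print each recognized
    # utterance as a text caption a deaf player can read.
    recognizer = sr.Recognizer()
    with sr.Microphone() as source:
        recognizer.adjust_for_ambient_noise(source)
        print("Listening... (Ctrl+C to stop)")
        while True:
            audio = recognizer.listen(source, phrase_time_limit=10)
            try:
                print("[teammate]", recognizer.recognize_google(audio))
            except sr.UnknownValueError:
                pass  # speech was unintelligible; skip this chunk

if __name__ == "__main__":
    caption_teammates()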
>> MEI KENNEDY: Mei here, gotcha.
So before we show that video, I wanted to ask Mari if you have anything to add about the experience of playing the game.
Later on, we’re going to talk about your individual work but just experience on gaming, is there anything that you Mari or Wendy wanted to add to that?
>> MARI KYLE: Sure, I’d love to add here.
For gaming in particular there are things you can do as a game developer that will improve the experience for people who need accessibility features and people who maybe do not usually need accessibility features but can still benefit from them.
For me in particular, I actually suffered a traumatic brain injury a few years ago, which means that I use captions and I use visual cues for sound to parse situations quickly.
If I’m getting attacked by multiple enemies, I need all these visual cues to really understand what’s going on, and that’s a completely different problem from what deaf or hard-of-hearing players have, but still, in adding these visual cues game developers can level the playing field and make it better for both of us to play the game rather than, you know, just improving the experience for one group or just improving it for the other.
And so these accessibility features become, in general, just good design principles.
Making accessible games is not, you know, this extra, you know, mysterious way of game designing, it’s just good game design and it’s just a way to make the experience better for everybody.
>> MEI KENNEDY: Awesome.
Wendy.
>> WENDY DANNELS: I wanted to add one other example that we can all consider.
Many organizations and associations advocate accessibility for deaf and hard-of-hearing people and for others who are disabled.
And they’re wonderful.
But specifically related to gaming, there’s something called AbleGamers, and there’s a website; I think there are over 3,000 members on the forum, and they’re all advocating, they’re all on there. Actually, there’s an interesting scorecard that pushes the people who develop games to consider disabled people, thinking about what kinds of controls you have to include, or it just asks questions: did you think about the audio, did you think about anything to do with the hands?
It’s like a scorecard: you read it, they check off the boxes, and you make sure they’ve done everything.
Using the scorecard is one example of how we can encourage people to become part of an inclusive community.
>> CHRIS ROBINSON: I totally agree.
>> MEI KENNEDY: Wonderful.
Thank you for sharing that.
In the chat, we just posted the link so you can go and check it out on that website.
OK.
So I would like to go ahead and share that video that Chris and his friend Brandon have created.
It’s a wonderful video so here we go.
>> Great.
I hope everyone enjoys it.
>> MEI KENNEDY: OK, so we’re going to turn off all the cameras and then we’ll show the video.
>> Many people don’t often think that they’ll meet a deaf person who doesn’t speak, especially during gaming — (see captions on video).
>> INTERPRETER: This is Chris giving directions on the screen and he sped it up.
ZX2X, giving directions.
Instructions on the game.
>> I’m actually impressed with this game.
Mission accomplished.
(See captions on video).
>> MEI KENNEDY: Hey, everyone, welcome back!
All right.
Everyone come back?
All right.
We have Wendy and — OK, great.
That was a wonderful video.
I watched it all and it was amazing.
I think it covers so much.
It has so many examples.
It shows the feeling, and you can actually see everything.
And what Brandon said, it’s almost 2022, c’mon, we need more accessibility, I mean, we need to start spreading awareness.
Did you have something you wanted to add, Mari?
>> CHRIS ROBINSON: Chris here, yes, I saw some questions being asked.
Let me look at that.
So who in the gaming industry makes video games?
So several different companies actually make the games, like Sony, Nintendo, Xbox, there are several different companies.
And some indies, which means, like, a smaller studio.
It’s not, you know, a top-of-the-line AAA game.
But there are several different companies out there.
>> MEI KENNEDY: Yeah.
And I’m looking at the next question right now that they’re asking.
It has to do with Oculus, they want to know more about that so I’m going to let Mari make some comments and I’ll turn it over to Wendy.
Mari, do you mind sharing some information about Oculus and research that’s been done?
>> MARI KYLE: I’d love to.
To read the question out: it was, I’d love to learn about accessibility in VR, Oculus specifically.
I can give you an update in the past year which has been the most recent update we’ve done publicly so last year in November we launched a pretty robust set of developer documentation that walks developers through how to design accessible games.
We also released a video tutorial that walks through the concepts and kind of narrates it out in case, you know, the documentation is too much to sit through.
And we also released a set of virtual reality requirements, which are basically a series of checks that developers have to either meet or be made aware of before they can publish on the Quest, Rift, or Go store.
These checks are things like does your game have subtitles, does your game have colorblindness settings, is it playable from a seated position, does your game, you know, use certain visual cues and context cues.
But these requirements were really put into place to make sure that we have a consistent level of standard of accessibility across our applications.
Currently, they are recommended, because we wanted to use this time as a runway so that developers can learn how to implement them, and then we are planning on making these required across all applications soon.
Other advancements that we’ve made at Oculus include system-level solutions for accessibility features like captioning, colorblindness settings, locomotion settings, height settings, and more.
We’re working on a ton of different ways that we as a platform can take on the responsibility of building these accessibility features into the hardware, so that developers don’t have to find ways to make their game consistent with other developers or other games; we just have these available for them to use and plug into their games.
In addition, we are exploring a lot of things right now and are in the process of building out things that I’m really excited about including using hand tracking to have, you know, conversion from ASL to — or sign language in general to speech or text within the headset.
We launched hand tracking a few updates ago, and we’re really looking at how best to utilize that for deaf or hard-of-hearing gamers.
And we have a plethora of other things as well that we’re interested in exploring and providing solutions for.
Some of these are how to provide, you know, captioning for massive, you know, multi-player VR experiences where people are standing right in front of you.
How do you layer the captions when multiple people are talking at once from different directions.
So we’re exploring a lot of topics.
There’s a lot of interesting work going on.
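As a rough illustration of the publish-time checks Mari lists (subtitles, colorblindness settings, seated play, visual cues), here is a minimal sketch in Python; the field names and the checklist shape are assumptions for illustration, not Oculus’s actual tooling:

from dataclasses import dataclass

@dataclass
class AccessibilityChecklist:
    # Hypothetical check names modeled on the examples Mari gives.
    has_subtitles: bool
    has_colorblind_settings: bool
    playable_seated: bool
    uses_visual_cues: bool

def unmet_checks(checklist):
    # Return the names of checks a build still fails.
    return [name for name, passed in vars(checklist).items() if not passed]

# A build without subtitles gets flagged before it can ship:
build = AccessibilityChecklist(False, True, True, True)
print(unmet_checks(build))  # ['has_subtitles']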
>> MEI KENNEDY: Wow, I’m seeing a whole slew of questions coming in right now but I want the audience to know we will get back to you.
I would just like to have Wendy share her research and work before we proceed and let the audience ask more questions.
Wendy?
>> WENDY DANNELS: Gamers are great, but don’t forget other groups.
So the entertainment centers, museums, science centers.
So I’ll give you one example.
In 2017 I went to Copenhagen, in Denmark.
And there was an amusement park where you could put on a VR headset while you’re on a roller coaster; you’re literally going up and down the roller coaster with the headset on, and you have no idea what’s going on.
That was a ride.
And at the time it was inaccessible, which is understandable, but four years later, actually this week, there’s a Van Gogh exhibition and a lot of different events going on in New York City, and they’re providing VR experiences for specific exhibits.
It’s an immersive experience.
But they don’t provide captioning.
It’s just not there yet, and it’s unfortunate, so I’m hoping for a future where this gets developed.
My research is focused on mixed reality, which means glasses you can see through while augmented reality pops up on the display, so let me give you an example.
So I can see you all right now.
So the research, the goal for the research is to provide real-time captioning and real-time ASL interpreting through these glasses so you can see them on the screen.
So let’s say you go to a planetarium and you’re looking up at the stars at a science center.
If an interpreter’s in front of you while you’re looking up at the stars on the ceiling and something’s happening you have to look down, look up, you’re missing all of this information.
So the goal is to remove all of these barriers where you’re going, oh, I wasn’t able to see that and now you have the glasses on, you can see the interpreter signing as you’re looking up at the stars through that augmented reality.
And you’re going to miss a lot less out of your life.
So several students are working with me this summer, they’re extremely busy developing a lot of different approaches to address this issue.
And it’s not only with the mixed reality like these glasses that I just showed you.
We actually have these students working on a HoloLens 2, and it’s amazing.
So you put it on and you can actually move things that are around in your virtual space.
Again, it’s completely accessible.
And that’s something that we all have to work hard towards together.
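The planetarium problem Wendy describes, keeping captions or an interpreter in view while the wearer looks up, reduces to anchoring a panel along the wearer’s gaze. A minimal sketch of that placement math, with illustrative offsets rather than her team’s actual design:

import numpy as np

def caption_anchor(head_pos, gaze_dir, distance=1.5, drop=0.3):
    # Place a caption panel a fixed distance along the gaze direction,
    # lowered slightly so it does not block what the wearer is viewing.
    gaze = np.asarray(gaze_dir, dtype=float)
    gaze = gaze / np.linalg.norm(gaze)
    return np.asarray(head_pos, dtype=float) + gaze * distance - np.array([0.0, drop, 0.0])

# Looking straight up at the stars, the panel follows the gaze:
print(caption_anchor([0.0, 1.7, 0.0], [0.0, 1.0, 0.0]))  # [0.  2.9 0. ]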
>> MEI KENNEDY: Mei here.
Wow, that’s definitely next-generation stuff.
Now that we’re moving more to holographic and seeing all of these different technologies come up, that’s amazing.
So we’ve got more questions but I’m going to hold off on them because we’ve got several questions from the audience.
So someone has asked, I’m curious if you’re aware of what the percentage is of video games that are inaccessible versus accessible.
Does anyone know the answer to that?
>> CHRIS ROBINSON: So right now I would assume — definitely not 50/50.
But I know some gaming companies are trying to — they’re trying to improve.
I worked with some gaming companies out there who have already tried, they’re already reaching out to different disability communities asking for feedback, asking how to make certain things happen and they want to understand so they can move forward and make some of these things happen.
And I would say one of the games that just made me feel so invested is called The Last of Us Part II.
With that game, I feel so engaged because there are so many options for accessibility; they have features for people who are deaf, for people who have low vision, and there’s a lot going on.
It’s the first game that I’ve ever seen that has all of the options that are out there.
I would say there were a couple that had some, and I honestly understand that sometimes they just don’t have enough time to put all of these in.
And most of the time, especially with game design, this is the last thing that people consider instead of integrating it throughout the whole design process.
And things are changing now.
So this is the first time that game designers are starting with this at top of mind when they’re going through the design process instead of thinking about it last minute and sticking it in so it’s really nice to see the gaming industry slowly — I mean, it’s catching up.
It’s slow.
And, again, I don’t know, I couldn’t give you a number as far as the percentages go.
>> MEI KENNEDY: Yeah, it’s hard to say.
>> CHRIS ROBINSON: Yeah, yeah, no, most games are not accessible, yeah, it’s moving but it’s — yeah, it’s not there yet.
>> MEI KENNEDY: So one of the themes that I’ve seen recur this week is that we need to design it from the beginning, not as an afterthought.
And I’m glad you mentioned that accessibility needs to be part of the design from the beginning; we’d see much more of an impact, much more potential.
So the next question I have here, let’s see, so someone asked: Are some games more accessible, and, Chris, you just mentioned a game.
Wendy, do you want to mention something?
>> WENDY DANNELS: Yeah, I want to add another example. When it comes to that word “accessibility,” you can have something that’s partially accessible or something that’s completely accessible, and there’s a big range that falls in that gamut. So when someone says “accessible,” we don’t know where they land.
And so how you define accessibility is the question and that’s something that I want everyone to think about.
There’s one company, actually a group called Owl survey, that made a game called the Vatican; that game is more of a journey experience, and it’s subtitled.
And they did include deaf and hard-of-hearing people in the process.
And so I have to give some credit to them for being inclusive.
And this was a couple of years ago that they did this.
But, again, a lot more work is needed.
>> MEI KENNEDY: Yeah, I was actually —
>> Next question.
>> MARI KYLE: I’m sorry.
I wanted to hop in. I also recommend Owlchemy as one of the most successful VR developers out there in the industry right now.
They do a ton of user testing really early on with groups that require accessibility features, so their games like Vacation Simulator or Job Simulator include VR superpowers, where they really allow folks who need accessibility features to have kind of a more fulfilling experience with these VR games than with other games in general.
I really enjoy their work as well.
>> MEI KENNEDY: Thank you.
So we’ve got another question here. Someone asked, regarding FCC filings: I’m not sure if this applies to y’all, but have y’all had any experiences filing with the FCC?
No?
No?
We can move down to the next question unless Wendy you have something.
>> WENDY DANNELS: This is Wendy.
When you file with the FCC, you have to keep in mind everything that relates to public accommodation.
So whatever prevents accommodation, you file it with them and they’ll put it into action.
So anything regarding a barrier, or something that’s preventing an experience you’re having in a public place; for example, I just mentioned the museum experience in New York City.
That’s something where we have to put ourselves forward and keep letting people know that there are barriers, and then they should be letting us know that they’re going to put something into action.
>> MEI KENNEDY: Thank you for sharing that.
I’ll move down to the next question.
It’s a good question.
What advice do you have for young deaf gamers who are wanting to immerse themselves and develop their gaming careers?
What advice would you have for them?
>> WENDY DANNELS: Who is this question for?
I’m sorry, I missed —
>> MEI KENNEDY: Anyone.
Whoever feels comfortable answering it.
Yeah, Wendy, sure.
>> WENDY DANNELS: Sure, I have some students who are working with me, and there are four different paths right now that they’re in: one of them is mobile application development, and another is human-centered computing, robotics.
And another one’s software development.
And then the other one is computer science.
And all of those majors are offered at NTID/RIT, and a lot of the students are very into gaming and developing apps, very excited about this.
We also offer a master’s degree, and there are deaf and hard-of-hearing Ph.D. students in the program working on getting their Ph.D.s soon; we’re all looking forward to it.
>> MEI KENNEDY: Wow, that’s amazing.
Does anyone else want to add any advice before we move on to the next question, Mari?
>> MARI KYLE: Sure.
As a game producer I found the thing that’s really been the most helpful for me in my games career has been understanding that I bring a unique perspective to the teams that I’m on that, you know, most other people aren’t able to bring.
And as a young, you know, deaf gaming professional, you can bring that perspective to those teams.
Some teams are starving for the perspective of someone who experiences these games differently and you can give that by being a part of those teams and raising your voice, you know, maybe cold e-mailing developers who make games that you enjoy and just saying these are my thoughts on how you can improve your game.
Or be a part of their Discords and be active, saying these are the challenges I’m facing and this is how the game could be made better.
I really, you know, just feel that you can bring a really unique perspective that the gaming industry desperately needs, so please raise your voice, get involved in the Discords, reach out to the developers, be active, and make sure that your voice is heard; that will give you the kind of experience that you need to make your way as a gaming professional in the industry.
>> MEI KENNEDY: Wow, that’s good advice, thank you.
So I’ve got another question that’s being asked here.
If I am a newly deaf gamer, what console would you recommend?
(chuckle).
>> CHRIS ROBINSON: I’ll take this one.
Growing up I bought different kinds of games, so I have a PlayStation, an Xbox, a PC, and a Nintendo Switch.
And really, there are so many different games out there.
It really depends on what you enjoy.
So if you want, like, something fun, you know, maybe Mario, that’s a popular one; it’s been popular for a while now, since I was born, and really there are a lot of different platforms.
It just depends on what you feel you’d enjoy.
But one thing I do want to say is, even though there are people out there who say “PC master race,” you know, that’s really not true; it’s not all about PCs, because all games are fun.
You can play it on different things.
If you play on a PC, there are more options so you can modify, you can do mods in the game and you can just make it more your game, your way, especially for accessibility features.
So AbleGamers tries to find mods that help disabled gamers be able to play specific games.
So I like it.
I like all of them.
I mean, if you find something that you enjoy and you’re interested in, try it.
And if you’re not sure, because maybe the game isn’t a hundred percent accessible for all people, you can just rent a game, like on GameFly; you can rent some games from there, check out their library, and demo games too. There are different ways you can do it.
>> MEI KENNEDY: All right.
So the next question is for Mari regarding the Oculus.
So they’ve said: I’ve been playing Oculus Quest games, and something that I don’t like is that the headset blocks my view while I’m playing, and I don’t feel safe with that.
Do you have any development that allows the user to see what’s actually going on without taking off their headset?
The goggles on the headset.
>> MARI KYLE: Yeah.
That’s a great question.
>> MEI KENNEDY: I hope you understand the question.
I don’t have very much experience with it.
>> MARI KYLE: No worries at all.
I totally understand.
I’m sure you can see around me I have a ton of things in my space, including a dog.
So I often have to think about the space around me as I’m playing my VR games.
One of the things we’ve built is a pass-through mode where, when you’re wearing the headset and you get too close to a boundary (a virtual boundary that you’ve set as your play space), you’re able to see through the cameras into the room around you, so you can actually see the space around you when you get too close to those boundaries; that’s one helpful aspect.
Another one is that boundary itself is a really great way to set up a space where you feel safe so that the headset knows that when you get close to those boundaries you are no longer in a safe space and it can warn you.
Currently, the way the Quest is set up is you have to outline an area (or you can play standing, staying in one space), and when you approach the boundaries of that area you will see a red grid come up around you; when you go past that red grid, instead of seeing the virtual world you’ll see the real world, because the cameras will show you what’s actually around you, so you can see what you’re doing.
We have those.
And I recommend, if you feel uncomfortable with the space around you and you’re still not sure whether you’re hitting things (I know I can always get super wrapped up in games), that you find games with a seated-only or standing-only mode, which means you won’t be moving around that space; you can play everything from where you’re standing and locomote through teleportation or with the thumbstick instead of moving around.
You can find in the Oculus store there is a line on the page that will tell you whether or not a game can be played seated only.
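To make the behavior Mari walks through concrete, here is a minimal sketch of that state logic in Python, assuming a 2D outline of the play space and using the shapely geometry library; the threshold and state names are illustrative, not the Quest’s actual implementation:

from shapely.geometry import Point, Polygon

def guardian_state(player_xz, boundary_points, warn_distance=0.4):
    # boundary_points: the (x, z) corners the player traced out.
    play_space = Polygon(boundary_points)
    player = Point(player_xz)
    if not play_space.contains(player):
        return "passthrough"   # past the boundary: show the real room
    if play_space.exterior.distance(player) < warn_distance:
        return "red_grid"      # approaching the edge: warn the player
    return "virtual_world"     # safely inside the outlined area

square = [(0, 0), (3, 0), (3, 3), (0, 3)]
print(guardian_state((1.5, 1.5), square))  # virtual_world
print(guardian_state((2.9, 1.5), square))  # red_grid
print(guardian_state((3.5, 1.5), square))  # passthrough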
>> MEI KENNEDY: That person just added on to that question, saying it’s not just for safety; being deaf, I need to see my surroundings and what’s happening, whether the house is on fire, there’s an argument going on, or someone is coming into my space.
But the next question being asked is: I’m curious if there’s any award recognition for games in the industry that have gone above and beyond for accessibility at a high-quality level and have a criteria system to give out awards for games accessible for people with disabilities that you’re aware of.
Chris?
>> CHRIS ROBINSON: Yeah.
I wouldn’t say it’s an official award, but it’s — me and Brandon, we did, you know, start doing a video game award.
But it was based on the options in the game.
So there was a video game award show hosted by a man named Geoff Keighley.
It’s a show that they put on, and I was really wondering what was going on there, whether it focused on disabled gamers, and I was telling Brandon, I said, hey, why don’t we do something like this?
Because they didn’t focus on disabled gamers, and he said sure, why not?
Because we don’t see a lot out there that’s related to accessibility awards, not that much these days so that’s why I was thinking at the end of the year we can do a game award show, before the new year.
So we were thinking, you know, hey, maybe this recent year that just happened, were there any accessibility awards, are there any award winners and things like that.
So we’ll do it again before the end of this year.
And, yeah, I might still have a YouTube link of the last one so I’ll send it to you.
>> MEI KENNEDY: Yeah, and I was just about to ask if there’s a website where you’re posting these videos so that people can come in and watch them.
>> CHRIS ROBINSON: Yeah, I’ve been starting to use my YouTube channel more often these days but I usually use Twitter mainly for all of the stuff that I post.
So I’ll post things on Twitter.
I post things about what I’m working on or videos, that’s sometimes where I stream my games.
I stream my games on a website called Twitch, so I just put all my gaming experience as a deaf gamer on there.
>> MEI KENNEDY: Yeah, so we’ll definitely get your Twitter handle and share that.
>> CHRIS ROBINSON: Yeah, sure.
>> MEI KENNEDY: OK.
So next question here: So the next question from the audience is: I’m curious if there’s a game out there that’s directed toward senior citizens 55 and above and if you have any recommendations for games for older people, any thoughts on that?
>> WENDY DANNELS: Yeah, it could be; most games are for all ages, so I would say it depends. There are different skill levels, and games have different motivations or goals behind them, so it’s best to pick a game that fits their goals and their needs.
>> CHRIS ROBINSON: Yeah, Chris here.
I want to add to that, you’re right, you’re exactly right because most games now offer difficulty options.
And there’s one called story mode, which means it’s not super difficult; you don’t have to worry about the action being so hard that it causes you to struggle.
It’s more about the story and enjoying that.
>> MEI KENNEDY: All right.
So I know you were just talking about senior citizens.
We’re going to shift to what the next person asked.
They said do you have any game recommendations for new people, maybe inexperienced in the gaming world, and want to start getting into games?
Again, just like you said, there’s a range, it depends on your interest, what motivates you.
And once you identify those, you can look for them and find what fits.
>> CHRIS ROBINSON: Yeah.
I would say the sky’s your limit, there’s no limit, you know.
Try something you like, check it out; start with maybe watching some YouTube gameplay and see what kind of game it is.
If you like it, try it out.
So I use YouTube most of the time just to make sure the game has subtitles, which means I can actually check out the game and play it.
You know, yeah, so if you see something you like, try it out.
>> MEI KENNEDY: All right.
I’m going to move on to — just give me a moment here.
Someone wrote in the comments, it’s related to what Wendy said and the planetarium.
>> WENDY DANNELS: Yes, the planetarium.
>> MEI KENNEDY: So I’ll let you read that comment a little bit later.
Let me actually — let me go down here.
No, there’s another question, I’m trying to find it.
I can’t find it.
All right.
Can everyone tell me your favorite game and what game you hate the most?
(chuckle).
>> CHRIS ROBINSON: Who wants to go first here?
OK, well, I would say — no, Mari, go ahead.
>> MARI KYLE: No, you can go ahead, you go.
>> CHRIS ROBINSON: OK, OK.
So I would say one of the games that I tried really hard to like was Dark Souls.
Dark Souls.
That game is extremely challenging.
I like the concept of it, but it’s hard for me to really get into it.
So that’s one that, you know, I don’t — I don’t mean I hate it, but it’s just like, eh, I feel like it’s a barrier for me.
And then one of my favorites, pssh, I would say 80% of games, honestly.
Here I’m seeing a comment: Red Dead Redemption. That’s a good game, a cowboy game, I really like that.
I like most fighting games, you can see on my shirt, it says combo breaker.
I like games that are more tournament-style.
And I got this shirt at a tournament in Chicago, actually.
>> Mari raised her hand, she wanted to make a comment as well, go ahead.
>> MARI KYLE: Sure.
I actually really love the game called Hitman, which actually has a lot of accessibility features, which I didn’t expect when I first played the game.
The game has this one feature that I really love where you can turn on kind of an assassin mode where the whole world is grayed out and you can focus on the key characters you are meant to assassinate and the secret places that you can go.
And for someone like me who has difficulty parsing a lot of movement and a lot of things at the same time, having that mode is really helpful and allows me to slowly, you know, plan out and go through the game as I’d like.
So I feel like that’s probably my favorite experience so far.
>> MEI KENNEDY: We’ve got so many more questions here and we’re running out of time.
I just want to let the panel have an opportunity to share any last words before we wrap up the session.
Yeah, Wendy, go ahead.
>> WENDY DANNELS: This is Wendy.
I actually posted in the chat my contact information.
You can contact me anytime.
I have to admit, I’m not good at watching and signing, and typing at the same time.
So you can understand why I may not answer some of them.
But I want to say an important thing: looking to the future, they’re going to have 360-degree immersive experiences where you can look all around you.
And if there are two or three people standing in your 360 immersive experience space, most people are going to be able to hear everyone in that space, but as a deaf person you’re not going to know where those people are standing, so there’s a group of people working hard to develop recommendations for this type of approach.
For example, you can show an arrow, and that way you know to look over.
By the time you actually look over, they might have finished their conversation and another person is starting a conversation behind you.
Let’s say there are three people behind you you’ll have to glance over trying to figure out which one’s having that conversation.
So W3C, an international standards organization, is working on developing guidelines through a specific group called Immersive Captions.
And people from Gallaudet University, RIT, Facebook, and Google are all involved in that discussion to make the 360-degree experience accessible to everybody.
And that’s one of the many things that we’re doing together.
So — but thank you for having me here.
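The arrow cue Wendy mentions comes down to comparing a speaker’s bearing against the wearer’s field of view. A minimal sketch of that geometry, where the 90-degree field of view and the cue names are assumptions rather than the Immersive Captions group’s recommendations:

import math

def speaker_cue(head_yaw_deg, head_xz, speaker_xz, fov_deg=90):
    # Decide whether a caption fits in view or needs a directional arrow.
    dx = speaker_xz[0] - head_xz[0]
    dz = speaker_xz[1] - head_xz[1]
    bearing = math.degrees(math.atan2(dx, dz))             # world direction
    relative = (bearing - head_yaw_deg + 180) % 360 - 180  # -180..180 vs gaze
    if abs(relative) <= fov_deg / 2:
        return "caption_in_view"
    return "arrow_right" if relative > 0 else "arrow_left"

# Facing north (yaw 0), a speaker due east is outside a 90-degree view:
print(speaker_cue(0, (0, 0), (1, 0)))  # arrow_right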
>> MEI KENNEDY: Yeah, and thank you for joining us.
Mari, Chris, any last comments before we wrap up?
>> MARI KYLE: Sure, yeah.
You know, if you’re a game developer, I highly recommend, you know, you play through your experience with the sound off.
You play through your experience only seated.
You play through with color turned off on a monochromatic scale.
Play through all these different ways and if you find you can’t play your games with these settings, then you need to make changes and make your game more accessible.
If you’re a player and interested in getting involved in games and making games more accessible raise your voice because we definitely want to hear from you and we definitely want to make sure that you have a good experience in the games.
Again, if there’s an issue where you’re unable to play a game, it’s not you; it’s the game developer’s responsibility to make it a better game and to take these considerations into account.
So raise your voice, and if you’re a developer consider everyone.
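A sketch of how a developer might turn Mari’s closing advice into a routine check; the pass names and the launch_game hook are hypothetical stand-ins for whatever toggles a particular engine exposes:

# One pass per channel Mari names: no sound, seated only, monochrome.
ACCESSIBILITY_PASSES = {
    "no_sound": {"audio_muted": True},
    "seated_only": {"seated_mode": True},
    "monochrome": {"grayscale": True},
}

def run_passes(launch_game):
    # launch_game(**settings) should return True if the core loop
    # is still completable under those settings.
    failures = [name for name, settings in ACCESSIBILITY_PASSES.items()
                if not launch_game(**settings)]
    for name in failures:
        print("FAIL: not completable under the", name, "pass")
    return not failures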
>> MEI KENNEDY: Sure, absolutely.
Chris, any last remarks?
>> CHRIS ROBINSON: Chris here.
Yeah, I just wanted to add that don’t be afraid to use your voice whether you’re deaf or not.
You know, when we speak up, someone out there is going to recognize that and try and get in touch with you and try to help raise your voice as well.
And that’s going to help a lot.
But it’s not going to happen instantaneously.
It’s going to take time.
I’ve been doing this for almost seven years now, and nothing really happened until my second year.
So be patient and keep motivating yourself.
You know, we want to change this gaming industry, we want to make it more accessible for myself and our friends and so it’s important to try to keep an open mind and stay motivated.
Be patient.
>> MEI KENNEDY: Thank you.
Wow, this is a wonderful panel.
So many questions.
This is a hot topic and I’m sure a lot of people are curious, and, of course, in the future we’re going to be changing gaming and changing the experience.
Thank you for sharing your knowledge and your experiences.
Have a wonderful rest of your afternoon.
Bye.
>> CHRIS ROBINSON: Thank you, thank you, take care, everyone.
>> MEI KENNEDY: You, too, bye.
>> MARI KYLE: Thank you.
Diversity, Equity and Inclusion in Information and Communication Technology
Rogelio Fernández Mota, Johnny Reininger, Jr., and Opeoluwa Sotonwa
Transcript
>> OPEOLUWA SOTONWA: I’m going to wait one more minute for someone else to join us before we get started. OK.
All right.
This is Opeoluwa.
Good afternoon.
How’s everyone?
Hi, this is Opeoluwa Sotonwa.
This is my sign name.
It’s an O.
Before we start, I want to make sure we have the interpreters in place, we have CART in place and all the accessibility options.
Yes?
OK.
All right.
Wonderful, thank you.
I’m a black male.
I’m bald.
I’m wearing a dark gray suit and an orange shirt.
And I’m sitting on a blue chair.
And I’m going to turn it over to the rest of the panelists to introduce themselves.
One is going to be a little bit late but they’ll be joining soon so we’re going to turn it over to Rogelio.
>> ROGELIO FERNÁNDEZ MOTA: Hi, my name is Rogelio Fernández.
This is my name sign, it’s an R on my chest, I’m wearing a gray shirt, I’m seated, with a picture in the background.
Native American picture and an orange background.
>> OPEOLUWA SOTONWA: Thank you.
And Johnny?
>> JOHNNY REININGER, JR.: Hi, I’m Johnny Reininger Jr.
This is my name sign.
I’m wearing a bright orange shirt with gray stripes, and I’m representing Native Americans, actually the residential school survivors whose families haven’t found their children, so I’m definitely working today in honor of them, and I want to honor all the tribal recovery efforts that are going on right now.
I have brown hair, it’s long.
I’m wearing glasses.
I have light skin.
I’m indigenous.
>> OPEOLUWA SOTONWA: This is Ope.
Thank you, Johnny.
So in this panel we’re going to discuss diversity and equity and, of course, this includes technology as well.
I’m going to be starting with some of your experiences that you’ve had with current technology advances, particularly with COVID.
Of course it’s forced us to learn a new way of doing things.
And most interpreters are working remotely.
And life is a lot different now than it was; of course, we’re hosting this through Zoom, and three years ago we probably wouldn’t even have thought of this possibility.
So I’m wondering, do the members in your community feel like they’re catching up with current technology?
Or are there some things that they are still working on?
>> JOHNNY REININGER, JR.: Hi, this is Johnny.
I wanted to let you know that I feel like in rural areas or poorer areas, you know, VRS right now, of course, is nationwide.
And language, vocabulary, and dialects vary all over the place.
I’m an indigenous person.
I sign in my native language.
And there are certain things, certain dialects, so I may be giving different terminology or words that interpreters just don’t understand.
And so I feel like we need to set up something that’s regional, not having interpreters who are all the way from California or New York, because they may not be able to recognize the different tribal languages and dialects that we use.
So that’s one thing that I wanted to put out there.
There are also several other stories that I wanted to share from our group, our indigenous group, as well as our Turtle Island Hand Talk group that we’re working with; this is the sign for it.
It’s not my work, and I’m not going to take any ownership; as a group, we worked on it, and I’m willing to share and, of course, will answer questions when people start asking them.
>> OPEOLUWA SOTONWA: Yes, I would like to ask.
I definitely have some questions regarding tribal land but I’m going to switch it over — turn it over to Rogelio and he can share some of his comments.
>> ROGELIO FERNÁNDEZ MOTA: Yes, absolutely, technology has improved over the years.
And I think with all of this happening, it makes us start thinking about what we have to do to do right by our community in order for all of us to catch up.
So, for example, when you’re thinking about platforms, of course, they’re becoming more user-friendly: we’re able to reach out to customer service in a chatroom, we can use Twitter, we have chats we can do on the platform without calling through VRS, so there are a lot more options than we had, but at the same time, there are a lot of issues that come up that can be frustrating.
So, for example, we don’t have Latino representation on billboards, websites, marketing videos, and things like that; there’s no representation out there where we can feel like we relate to other people, or relate to even the issues that are out there.
So the information doesn’t reflect what I’ve experienced.
So here’s an example.
There’s a lot of hearing parents that are out there who have the opportunity, because different companies, can improve their marketing.
There are hearing parents out there who can get information.
And they go, OK, my deaf child can benefit from this information and they can hand it over in Spanish, in English, or in sign language.
And even do it at an earlier age.
This is a lot better than waiting much longer; there are a lot of kids who haven’t learned these things, particularly about technology, until they’re a lot older, and we want to see that gap filled so these kids can learn at a younger age. I think that’s really important.
>> OPEOLUWA SOTONWA: Thank you.
I want to add to that most recent comment.
It seems that technology companies in the industry often invent new products and technology with the purpose of supporting people and their needs, but they don’t focus on groups with particular needs, like Latino communities, to try to put things into Spanish, like you just mentioned, with different resources.
So I’m wondering if these companies can start thinking about how they can change their DEI priorities so they include people with other life experiences from the start, especially when it comes to using these products, instead of not being aware of it and having to include them later.
Do you want to share some thoughts regarding that?
>> ROGELIO FERNÁNDEZ MOTA: Sure I’ll be more than happy.
When I was a young kid, I remember looking at the TTY as well as captions.
Back in the day, it was this big machine, and I remember talking to my dad, and my parents didn’t know English, so there was all this information, they didn’t know how much they would have to pay, they had absolutely no idea so they had to depend on my younger sister as well as my older sister for translation of what that document said.
And so that was a missed opportunity.
Can you imagine how many Hispanic parents out there are getting something and saying, no, we’re not going to get this?
And, actually, this product may be a benefit for the kid or the family and it can really improve their reading and writing, language comprehension, accessing things that are on the TV.
So from that young age until today, and I’ve been involved in the VRS industry for 17 years now, one thing I’ve noticed is that a lot of companies aren’t really invested in putting out flyers or doing marketing in Spanish.
So one of the things is we went over to a trade show, I remember that, and we had a booth.
And it was in a strong Hispanic community.
And I said why don’t we have these marketing materials in Spanish and they said, oh, we don’t have the budget for it.
It’s questionable.
>> OPEOLUWA SOTONWA: Yeah.
So we’re going to transfer — we’re going to turn it over to Johnny.
I know you mentioned the tribal lands.
And I know that sometimes it becomes a struggle to have accessibility there for people who live on the tribal lands.
So the community is experiencing a gap in its needs, particularly when you talk about broadband Internet, emergency services like 911, and a variety of other things.
Can you share with us some of the experiences and some of your thoughts?
>> JOHNNY REININGER, JR.: Yes, I’ve experienced several of those issues that are on our reservation.
We call it Indian country.
Most of us don’t have broadband, we’re very limited in what we have access to.
And so in the city, there’s a lot of options that they have.
They have high-speed Internet.
But we’re limited here.
So the download time is really bad.
The speed: some of them are getting only 2 or 3 megabits per second.
It’s very slow.
It’s very low quality.
And we can’t meet the needs especially when you need to communicate over a videophone.
A videophone would require about 4 to 10 megabits per second to be able to keep up with the sign language and make sure that it’s smooth without any disruptions.
So that’s one thing that relates to accessibility.
And, of course, we have TDDs that we’re still using to make phone calls.
So videophones are provided on the reservations.
And also the information that’s being passed on from the FCC through the tribal government is not effective.
And federally, there should be some type of collaboration, but because we’re not getting a lot of information from the FCC or the tribal government about our rights and our needs for accessibility, it’s not happening.
Also, there are a lot of us who are financially restricted.
And we just don’t have a lot of money.
So we call that a Lifeline, right?
Some of us have experienced this already.
Having a Lifeline.
But even Lifeline provides only a limited amount of data.
So most of the time it doesn’t even suffice; you can’t always get the video capacity for VRS or VRI, what you can get is limited, and this is what we’re struggling with too.
If Lifeline upped the services and met our needs I would consider that to be equivalent but as of now, Lifeline is not considered equivalent services, not for any of us who are financially restricted.
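Putting Johnny’s numbers side by side: a minimal sketch that checks a measured connection against the 4 to 10 megabits per second range he cites for smooth signing (the labels are illustrative):

VIDEOPHONE_MIN_MBPS = 4     # floor Johnny cites for smooth signing
VIDEOPHONE_GOOD_MBPS = 10   # upper end of his range

def videophone_quality(measured_mbps):
    if measured_mbps < VIDEOPHONE_MIN_MBPS:
        return "unusable: signing will freeze and drop"
    if measured_mbps < VIDEOPHONE_GOOD_MBPS:
        return "marginal: expect stutter under load"
    return "smooth"

# The 2-3 Mbps he describes on the reservation fails the floor outright:
print(videophone_quality(2.5))  # unusable: signing will freeze and drop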
>> OPEOLUWA SOTONWA: This is Ope here.
I agree with you, Lifeline is definitely not equivalent, it is limited and it’s not keeping up with the current technology needs.
Technology is requiring high-speed Internet and a variety of other things for us to work remotely and there’s a capacity with Lifeline and that’s something that hopefully the FCC and other policymakers will take a look at.
It’s not only that, too; there are people out there living on islands or in U.S. territories, Americans as well, and often policymakers are more focused on the 50 states alone.
Which leads to neglecting groups of people that are out there.
And so I wanted to move this conversation, actually, to a different topic.
I want to talk about economic justice, and I’ll be showing you a brief, minute-long film from yesterday, where Dr. Lea Cox made a presentation on racism and audism, and there were a couple of other points that were made.
So if you don’t mind turning off your videos so that way we can show the movie.
Thank you.
(See video captions).
>> OPEOLUWA SOTONWA: That clip is so powerful.
We don’t really think about those little things, those nuances and it makes us realize that we have been fighting against racism and we have been fighting for equality, but still the system has been failing us.
The policy has been focusing on justice, but we have to think about what the community feels.
The community feels that impact.
So these are the type of questions that, you know, you should be asking yourselves.
>> ROGELIO FERNÁNDEZ MOTA: Yeah, I can definitely see that.
You know, I’ve been working in the industry, I’m sorry, at a VRS company, for 17 years, and really it’s the investors; that’s what’s hurting us in the end, and our community suffers the most from that.
I mean, sometimes it’s intentional, sometimes it’s not, but oftentimes it is intentional, unfortunately.
I mean, with that mindset, that’s where the system breaks down.
For example, like equipment, like an iPhone, as you see right here, my iPhone.
You know, a lot of people cannot afford extra space on their iPhone so when they’re streaming or downloading photos or anything like that their space is limited.
Meanwhile, the apps keep improving, so we have to switch to a different iPhone, you know, upgrade to a better iPhone, and we don’t have the privilege to get that.
And the iPhone itself is actually fragile.
The quality itself can deteriorate over time.
So there are a lot of things with that where we will constantly face challenges.
I think we need to approach a different type of discussion with the companies, to make sure that things are affordable, where phones can be at a reduced price, absolutely.
>> OPEOLUWA SOTONWA: Thank you.
Johnny?
>> JOHNNY REININGER, JR.: I think also with the company, I think they need to kind of attract a diversity of the deaf population.
I think they need to think about that when they’re developing.
Because right now, with the video on the webcam, they have an automatic setup where it adjusts the contrast itself.
But most companies, when they initially develop, typically develop toward the white population, and it’s made by the white population, so automatically it doesn’t fit people who are darker-skinned.
So the people who are POC actually have to adjust their lighting and fit what the company made.
If the company just developed it to fit all types of people during the development process, that would solve it; and it’s not only that, there’s a good example: artificial intelligence.
AI nowadays, especially automated systems with facial recognition, has already been developed by, again, white men.
And that face recognition actually misses people who are POC, so already that system is failing that population, and if they had simply developed that software to include all types of people, that’s one simple fix.
That’s what I wanted to add, piggybacking on that comment.
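The failure Johnny describes is measurable: run the same detector over a labeled sample set and compare hit rates across groups. A minimal sketch, assuming you supply the detector function and the (image, group) pairs yourself:

from collections import defaultdict

def detection_rate_by_group(samples, detect_face):
    # samples: iterable of (image, group_label); detect_face(image) -> bool.
    # A large gap between groups is exactly the bias being described.
    hits, totals = defaultdict(int), defaultdict(int)
    for image, group in samples:
        totals[group] += 1
        hits[group] += int(bool(detect_face(image)))
    return {group: hits[group] / totals[group] for group in totals}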
>> OPEOLUWA SOTONWA: All right.
So one of the things, I know you just mentioned it and some other people are commenting, too, in the chatroom is capitalism.
And it profits, profits are the number one priority for companies.
The second thing that is true is the implicit bias, right?
>> Definitely.
>> OPEOLUWA SOTONWA: We’re talking about implicit bias that is not even recognized within the system.
And one of the biggest reasons why we encourage everybody, whenever they’re developing products or technology, they need to hire more people of color, more diverse people with diverse experiences.
Because your lifestyle experiences are different from their life experiences.
And we can reframe it by introducing different people and it helps them reframe their own lens.
And it’s through the everyday needs of people of color, sharing their experiences.
And, of course, you can say, when we’re talking about economic justice, that we think that will lead to resolving it, but sometimes it can also lead to tokenism.
That’s where they’re just putting somebody, like a black person, in that space to represent that community, but the structure and the system in place is already designed, which means the person is just there; they’re not able to be effective for their community.
So we really have to work on breaking down and dismantling and unpacking those systems that are in place, reviewing the things that are in place just to make sure that everything is appropriate.
And we have an equitable system for everyone that includes people’s experiences.
That would be helpful.
What do you think about that?
What are your thoughts about that?
>> ROGELIO FERNÁNDEZ MOTA: I definitely agree with that comment.
For example, when we have discussions about the policy, oftentimes the policy is made with a white lens.
If you notice, with any type of product release, my personal opinion, and it should be, we should have a more consistent and similar design.
So individuals with different types of experiences than the beta testers can look at that and give feedback on it.
>> 95% of the time we’re asked for feedback, but we do feel like we suffer from tokenism, because they already decided way back then.
>> JOHNNY REININGER, JR.: I just recently applied to be a beta tester and I noticed they didn’t even ask me my identity.
I’m deaf.
They asked me if I had a disability, but they didn’t ask me if I was Native American, indigenous, was I black.
They said testers were selected at random, maybe by names; maybe they see my name, and obviously my name looks white, Johnny Reininger, it’s from my father, my stepfather, so maybe that’s the reason they picked me. Is it based on names?
That’s something that — they need to include more diversity.
I’m sorry, I have a cat on the screen, by the way.
But, yeah, they need to include more diversity within their processes and they need to when they’re hiring beta testers.
>> OPEOLUWA SOTONWA: Perfect.
I wanted to go ahead and give our audience an opportunity to ask questions. To our audience: if you have any comments or questions, put them in the chat and we'll sign those questions out.
Right now I’m curious if you guys have some thoughts or advice for tech companies on how to improve their DEI policy plan.
And not just as words; we want to see it put into action.
>> JOHNNY REININGER, JR.: This is Johnny.
I have suggested to the VRS industry that they should try to use interpreters who are more regional to the area, so that they can understand the regional dialects.
That’s really important.
Because if they’re getting interpreters from New York, they may interpret something wrong.
Or if a person identifies themselves as a doctor and perhaps they’re saying some things that are not correct on the tribal group and they’re on the phone and the other person is a doctor, too, and there’s a communication barrier and misunderstandings because the person’s going, oh, this is not correct, and it’s not the right name, and we feel like there needs to be more of a regional setting.
Of course, we want to be flexible but having interpreters from the regional setting would just create more of a way that interpreters can be more accurate, they can say the more appropriate terms, and that way they could identify better with that community.
>> ROGELIO FERNÁNDEZ MOTA: I think we have talked about the DEI issues and I think we just need to be mindful, DEI, yeah, unfortunately, that’s a bus word, right?
So we need to challenge ourselves.
We need to do what is right for our community.
We need to do what’s right with individuals with intersectional identities and see what they can benefit from the community because they’re not.
I think my suggestion for different companies is to hire more BIPOC people and to train them and not to set them up for failure automatically and that change starts now.
When you start working on projects it needs to be right in the start.
We don’t need to call POCs for feedback after the thought, no, we need to have them at the table with us at the very beginning.
And that needs to stop.
I encourage anyone who attends — I encourage everyone to attend their local and regional seminars that are, on topics of POC topics, and I think to get out of your comfort zone and meet different types of people, I think that’s really going to help you open up your mind.
And have these people lead, not be in the background and not just have, you know, the company the face of the company type effort.
I think you need to have people who are really involved from a POC standpoint.
I think that’s it right now.
>> OPEOLUWA SOTONWA: Thank you.
So looking at the Q&As and a couple of people have asked questions, one person said why don’t we have any hard-of-hearing people who are on the panel?
And that’s — yeah, that’s a missed opportunity.
So I know TDI will probably make note of that now and I think for future conferences we will definitely plan to add that so thank you for putting that out.
Secondly, I wanted to mention another question; I think this one was for Rogelio: why aren't you using Spanish sign language in your conversations today?
So I just wanted to see what your thoughts were about that.
>> ROGELIO FERNÁNDEZ MOTA: (No sound).
>> OPEOLUWA SOTONWA: OK, thank you for sharing that with us.
>> ROGELIO FERNÁNDEZ MOTA: Certainly, we have to think about the people who are providing accessibility. To answer the question: yeah, I operate Mano Translations, and we do Spanish to English.
For example, we translate texts and articles, we do translation in the medical field, in hospitals, in education, and things like that; we do a lot of translation work.
We also do consulting, such as D&I training, and we hire a lot of different people to do different projects.
>> OPEOLUWA SOTONWA: Thank you.
Johnny, do you want to share some of your thoughts too?
>> JOHNNY REININGER, JR.: Really, I work with a group, a collective group of indigenous deaf people, and we actually just started Turtle Island, and that focuses on a variety of issues related to our indigenous people.
So as of now, many of us have suffered from language deprivation at a very young age and that’s been a really big struggle for us.
Language deprivation is also culture deprivation.
So that’s where the communication breaks down, especially with our hearing families and even with the deaf community.
There’s lacking of, you know, the culture in education.
So when they go into a mainstream school setting and then get placed into a deaf school they finally, you know, receive their language, but the opportunity is already lost to be connected with their culture because it was deprived at such a young age.
So we try to really educate about a variety of tribes, because the issues are not the same for each tribe.
For instance, there's one issue with chiefs and different types of leaders in tribes. People recognize the idea of a chief, so the belief is, oh, you're Indian?
OK.
They automatically think that I have a chief in my tribe, but my tribe specifically doesn't have chiefs, so the identity of who I am as an indigenous person is missed.
So that’s just one example.
There’s just a variety of different things.
So the idea is that we need to educate the community on what indigenous is and how many different identities are within that.
And there are different cultures that need to be picked up so we can have a stronger framework of what indigenous means.
>> OPEOLUWA SOTONWA: Thank you.
This is a lot.
Assumptions lead to people having the wrong idea about someone's identity, and I think it's best just to ask, right, to ask the person: how do you want to be identified?
What do you want to be called?
And sometimes people don’t want to be identified in that way and that’s OK.
They may have their own personal reasons behind that.
They may have had a traumatic experience.
They may have felt oppressed.
They may want to hide and that’s OK.
The thing I want to stress is anybody who wants to be identified in the correct way, we give them that respect too.
And we ask.
OK.
Let me go ahead and check the Q&A for some more questions.
I’m looking now.
One of the questions in the Q&A says: could you give some ideas about new products, new technology that has already been developed but doesn't meet the needs of the BIPOC community, or of disenfranchised, marginalized communities?
>> JOHNNY REININGER, JR.: Johnny here.
Really, I would probably say technology related to the FCC; like I said, the video automated system doesn't fit BIPOC communities.
(No sound.)
>> INTERPRETER: He was just saying about the settings, when someone has black skin.
>> JOHNNY REININGER, JR.: As I mentioned, with the settings, it doesn't automatically have the contrast to fit the BIPOC community, so that is one first step that we need to take.
>> OPEOLUWA SOTONWA: Thank you.
Rogelio, do you want to add something?
>> ROGELIO FERNÁNDEZ MOTA: I’ve been involved in the VRS industry, and obviously my family, their first language is Spanish.
So CSDVRS became ZVRS, and at the time I remember telling them that we needed to have ownership, meaning we needed our own platform and our own way of advertising and marketing.
They listened, and they gave us the power to set up a whole Spanish VRS offering, and we set that up with Spanish representatives, and some VRS companies ended up following suit, scrambling to get the same thing developed; I think that's an important piece for us to consider. Of course, ever since I left, things didn't gain any more traction, because I was the one in the company who was barking and letting them know to do this. Our people, our community, we requested and challenged things for good reasons.
We're not just making things up to stir up trouble; we want a better product, we want better services that actually benefit us and benefit the company too.
If the company really wants to improve, and if they really want to grow their customer base and the profit margin they focus on, then they ought to do the right thing by making sure we're all included.
>> OPEOLUWA SOTONWA: Yeah, really, the main focus is privilege, especially with white-man-owned companies.
The opportunity to really include, you know, BIPOC communities gets taken by the white-man-owned companies.
So I think the opportunity is lacking there.
And I question myself, like, where's my opportunity, where's my American dream?
So I'm really hoping that those people, the white men who own these companies, are sitting in and watching this and thinking to themselves: how can we work with BIPOC communities, and how can we grow in a better way to make a better impact on the community?
Because you and I, with our life experiences, every single day we live it, see it, and breathe it.
>> ROGELIO FERNÁNDEZ MOTA: Oh, yeah.
>> OPEOLUWA SOTONWA: We know our community.
We know it like the back of our hand.
And that’s what they need to do is just innovate and help grow with it.
But are they really going to do it?
Are they really going to mean what they say?
>> ROGELIO FERNÁNDEZ MOTA: I just want to add to that, too: it's not just VRS companies, it's a lot of different companies. They usually do the same Black History Month or Latinx Month or Native American Month, and if they just celebrate and recognize that one month, it's not going to work; they have to do it year-round.
That's what we feel.
That's when we're going to feel like we're included and welcomed to use their services, their products.
>> OPEOLUWA SOTONWA: Yeah, so a single month doesn't really represent the whole thing, you know; it has to come with action.
Johnny, did you want to add something?
>> JOHNNY REININGER, JR.: Yeah, not really, no.
I just — I don’t really have anything to add to that.
>> OPEOLUWA SOTONWA: OK, thank you, we’ll go back to the Q&A and the chat.
So Dana says there has to be a better approach to a diversity of people; we have to attract and take a better approach to all signers, not just a small part of the spectrum but really the whole spectrum, especially late signers, late-learner signers. We need quality accessibility for all deaf and hard-of-hearing people on that spectrum, and there are a lot of people on it who have different needs: sometimes they sign, sometimes they mouth what they need, sometimes they only need captions.
There are just so many different needs among deaf people on that range, and it's not all the same.
There’s one question that’s related to the hard-of-hearing community.
I’m curious, are you both familiar with the hard-of-hearing community and what their needs are or people who don’t sign?
Because I know technology can be really frustrating from that standpoint.
>> ROGELIO FERNÁNDEZ MOTA: Yep; for example, I know there are a lot of people in the Hispanic community who can't speak.
There are a lot of people who don't know that they can use VCO or CapTel to communicate with their loved ones, friends, and businesses; a lot of them just don't know this.
And that's a piece that needs to be emphasized more in marketing, and we need to teach people how to use CapTel and VCO.
Yeah.
I think it's really important; we definitely need to focus on that.
>> OPEOLUWA SOTONWA: Yeah, Rogelio, I'm actually curious: for that equipment, the technology equipment, I noticed that most of the marketing is in English.
Do you think that maybe that's because those who are in charge are ignorant about it?
>> ROGELIO FERNÁNDEZ MOTA: Yeah, we need more Hispanic representation to show how to use VCO, VRS, CapTel, all the products just to show how they operate, definitely.
>> OPEOLUWA SOTONWA: Thank you, thank you.
Johnny, did you want to add anything?
>> JOHNNY REININGER, JR.: Yeah, I just wanted to mention, as I said before, ASR, a term I just learned: automated speech recognition. A lot of people have accents, a deaf accent or cultural accents, and these accents will also skew how ASR recognizes and changes our language.
And sometimes it involves vocabulary like I mentioned: a word I may say, like Markul, people may wonder how to pronounce it, and in English ASR may translate it to Mavoski, but I pronounce it differently when I say Mascul; it comes out a different way because ASR doesn't recognize that word.
One interesting story from our Turtle Island group: there were some regional areas, especially people who use VRS on the islands, who had a different type of accent than some other people, say Mexican people, or people in Canada or in California; there are different accents all over the place.
And so there are accent variations.
And some of the family members may not understand that accent.
So one of the things I want to stress again, it’s important to have more regional representation, especially when it comes to translation within the VRS industry.
Also, we need to look at videophones; no, what do you call that?
The video interpreters, VIs, that's what they're called; the VIs need to be more regionally based instead of coming from all over the place.
>> ROGELIO FERNÁNDEZ MOTA: I just wanted to add something to that.
I wanted to talk about regional signing and regional dialects, even across different states and different countries. I've noticed that when VRS was initially established, the VRS companies actually had very high expectations in hiring their interpreters; they had to be certified.
But as the VRS rate dropped, they became more lenient on their hiring screening process for interpreters.
So they were actually hiring interpreters straight from graduating programs so they were novice interpreters.
So the quality deteriorated rapidly.
So, yeah, the ability to interpret and to receive different types of regional signs has gone down; I've noticed the quality has gone down for sure.
>> OPEOLUWA SOTONWA: I want to share something with you too.
When I first moved here to America, I moved from Nigeria, and sometimes, you know, I'd speak for myself and sign as well. I lived in the D.C. area, and when I spoke for myself, most people understood my accent; it's a thick accent.
The challenge for me was moving into another part of the country.
So Missouri and Kentucky and people there wouldn’t understand me.
And I would feel somewhat deflated.
And then sometimes, even with my signing, I would use a different sign for something; it wasn't maybe very ASL.
It would include some VSL, too, some other sign language, and that led to a whole other issue within our community, where there's a feeling that our freedom to express ourselves in the language we're most comfortable with is not there.
So I wanted to know what your experiences are with that within your community.
>> JOHNNY REININGER, JR.: Yeah, I mean, we experience the same thing, Johnny here, by the way, especially for indigenous deaf and hard-of-hearing people included.
You know, there are a variety of accents and a variety of signing accents, as you would call it.
So there are the more traditional indigenous signing styles.
You know, like, I just sign "people" this way, and then there's the sign where you use your index finger and thumb to sign "people"; there are different ways to sign "people," and I'm not a huge fan of that one. But anyway, would interpreters themselves be able to catch that? Would ASR be able to recognize that?
I’m not sure.
Just because of the variety that’s included.
>> ROGELIO FERNÁNDEZ MOTA: This is Rogelio.
In our Hispanic community, we’ve experienced discrimination.
People knocking us for our signing.
For example, Chile, this is our sign for it.
In the Latino community, this is how we want to sign it, but other people say, no, you have to sign it this specific way, the ASL way, a white-dominated sign, and that's not how we sign it. That's just one example; there are a lot of regional dialects, and we're constantly getting criticized for the ways we sign things.
>> OPEOLUWA SOTONWA: I’m hoping people who are watching this right now can really — just recognize that there is a huge bias.
And we have lived with discrimination.
And it’s subtle discrimination as well, you know.
We have to live in a hearing world, and our community can do better, you know.
I’m going to look at the Q&A just a little bit more.
Hang on one second.
One question is: many people aren't sure how to become good allies to everyone, including us in the deaf community.
Maybe they're not familiar with our community, so they're not sure how to respond and how their response can help support our community.
There was a suggestion of a book on radical allyship that you can actually purchase; the suggestion came from Jim, a CVI board president.
>> ROGELIO FERNÁNDEZ MOTA: Yeah.
Can I —
>> OPEOLUWA SOTONWA: Oh, OK, go ahead.
>> ROGELIO FERNÁNDEZ MOTA: OK.
Often we think that by just reading a book we automatically improve our unpacking or our understanding of ourselves; it's not going to happen.
We need to intermingle with different groups of people, and from there you'll get called out, and you'll get called out in different ways.
There are different levels.
And you’ll automatically start improving.
So I encourage you just to do more, not just read a book.
>> OPEOLUWA SOTONWA: Right, I agree, I think that’s also part of what Leah Cox was saying, and what she was sharing yesterday is that there’s not enough resistance.
We have to act on that resistance.
We have to be antiracists.
So that is the biggest thing, is to interact with different groups of people who have experienced discrimination firsthand.
Thank you, Rogelio, for that comment.
I think there was another question of — let’s see.
So COVID has really forced people to integrate technology more, like using Zoom.
So, speaking of telehealth and that system: do you think there are more interpreters available now, especially BIPOC interpreters, in places where there are more BIPOC customers?
Or are they just limited to interpreters who are state-based?
You know, technology really unleashes a plethora of possibilities by allowing any type of interpreter to work.
And we can really enhance that experience for consumers, accommodating them with a more appropriate fit.
Especially virtually, rather than face-to-face.
Does that make sense?
I mean, through technology, isn't that an easier way to access interpreters?
>> JOHNNY REININGER, JR.: So an example with Turtle Island, we have interpreters, we have an interpreter protocol for indigenous people.
And what we do is have a list of guidelines that we recommend for interpreters.
And I doubt there are many interpreters who would even understand that.
I mean, we’re diverse.
We come from different tribes, we come from even different beliefs.
We have different sacred religions.
And concepts that we hold.
There’s a lot of differences between all of us and a lot of interpreters don’t understand that.
And so when they come to interpret, even in a VRI setting, let's say, how would they even know? They've never been here, so they don't have that experience, and there aren't that many indigenous people in their state, so how would they have the experience of what it's like to live in our shoes?
So it’s impossible.
Living in this state, Oklahoma, this is where I'm at right now, and I understand it; I have walked in these shoes.
You know, people who live here could be better allies as interpreters than someone from another state who has never understood what it's like.
>> OPEOLUWA SOTONWA: Thank you, thank you so much.
Rogelio, did you want to add to that?
>> ROGELIO FERNÁNDEZ MOTA: Yeah, I wanted to answer.
I actually have two different answers that I wanted to provide for that.
So, yes, technology is improving, which means more interpreters can work from home, and that means the pool of interpreters is definitely bigger, which is a great thing for qualified interpreters. But the other piece is the interpreter training programs: the curriculum, the training that goes into them, the pipeline that's out there. It takes years and years, so the ability to recruit BIPOC interpreters is not in place yet.
Let me give you an example: my sisters sign really well, and recently, I would say just before I set up my business, my two sisters said, oh, I didn't realize I could become an interpreter or a translator, and I said, yes, of course.
That hit me hard when they said that.
That means they’ve never had a recruiter, a marketer, anybody out there, anything visible for them to say I could be an interpreter at a young age and they didn’t even know.
That was another piece of feedback that I have for the different companies out there and for interpreter training programs: make sure your curriculum is revised so it can accommodate and fit BIPOC people, and so they can recruit more BIPOC interpreters for us.
>> OPEOLUWA SOTONWA: Thank you so much for that.
Yeah, really, it’s the pool of interpreters, it is for BIPOC interpreters is very limited.
I know a few black interpreters themselves have been in the work of developing a dictionary for BIPOC interpreters specifically all around.
So it kind of gives that access for BIPOC interpreters to see who they need to network with and to kind of gain some jargon and knowledge to, you know, set themselves up for success.
I’m actually looking at the time.
We have five minutes left.
Do you have any final comments or anything to share before we wrap this up?
Johnny, how about you?
>> JOHNNY REININGER, JR.: All right.
Stories are really important.
Especially in the process, of just developing policies.
Really, stories are vital.
Let me give you an example.
So indigenous people, a long time ago, already had a lot of technology.
We had smoke signals, developed to signal people and share information.
That was already there a long time ago.
And those stories still translate today.
What’s so important is that knowledge base and that understanding of the story.
And when you start listening to those stories, it will start improving your product and it will improve your services.
It will also improve your development process.
You can't just skip gathering the stories beforehand, assume everything is fine, and start developing your product for your customers; and it's not just one story, it's a variety of stories that you have to include to make sure that you're considering all diverse perspectives.
>> ROGELIO FERNÁNDEZ MOTA: Yeah.
I understand that, you know, companies competing with each other, that's important; I understand that from a business standpoint.
What I think I want to ask companies to do is to do the right thing by the community, to give back to the community, and start by hiring BIPOC people.
I think it really just starts in the hiring process.
And just be consistent with your efforts; that's really what I want to say.
>> OPEOLUWA SOTONWA: Thank you, thank you to both of you for sharing your knowledge, your experience, and as well as representing your community.
I’m sure there’s a lot of people here who are in the community who are learning and I’ve learned new things and I’m sure they’re learning about how they can support the community and how they can become better allies, not only performative, doing performative allyship but also getting engaged.
I want to thank our audience for joining us, especially with all the comments you made and the questions you made in the chat.
I wish I could have time to answer all the questions that are out there.
Of course, we have a limited amount of time.
I also wanted to thank all the sponsors who have made this conference happen.
We couldn’t have done it without you, here at TDI, so thank you, thank you for everyone who joined us and I hope you all continue to enjoy your day.
>> ROGELIO FERNÁNDEZ MOTA: Thank you to the interpreters and translators, and thank you for attending this event. This is Rogelio.
Bye, everybody.
>> JOHNNY REININGER, JR.: Thank you.
>> All right, bye.
Connecting DHH to Broadband
Zachary Bastain, Shellie Blakeney, Sarah Leggin, Corian Zacher, and CM Boryslawskyj
Transcript
>> CM BORYSLAWSKYJ: All right, well, welcome, welcome to this panel.
OK, are you guys ready?
My name is CM and my last name is Boryslawskyj.
So I’m the board member of the northeast regional area.
And I’m also a treasurer as well.
I want to take a moment to describe myself, I have short curly black hair, I’m wearing a sleeveless black tank top and then I have a silver pin at the front.
I have some shelves behind me, I have some paintings, some books. I’m also white.
Go ahead and introduce yourself, your name, your role and we can go from there.
Any volunteers?
Who wants to go first?
>> ZACHARY BASTAIN: Hello, my name is Zachary Bastain, I’m a white man wearing glasses with brown hair pulled back and a gray shirt.
>> SHELLIE BLAKENEY: Hello, I’m Shellie Blakeney, I’m an African American woman and I’m wearing a black blouse with a cream-colored jacket.
Good afternoon.
>> SARAH LEGGIN: Hello, I’m Sarah Leggin.
I’m a white female.
I have blonde hair and I’m wearing a navy blue blouse and I’m looking forward to our panel discussion, thank you for having me.
>> CORIAN ZACHER: I’m so excited to be here. I’m a white individual with short brown hair and a blue dress with white color.
>> CM BORYSLAWSKYJ: OK, great.
We’re having some technical difficulties.
I don’t think — I think people are having a hard time seeing Shellie.
Is there any way you can brighten up your screen?
Because I think some people in the audience are having a hard time seeing your screen, just an FYI.
Do you have a shade behind you or something?
>> SHELLIE BLAKENEY: I’ll try that.
Sure.
>> CM BORYSLAWSKYJ: The interpreter as well needs to step a little bit closer to the screen, I think some people in the audience are having a hard time seeing the interpreter as well.
Do you mind stepping up?
Yep, perfect.
I know you’re a little short so if you could just — yep, that’s much better, much better.
All right, perfect.
So — that still is not — OK.
So my first question is, I know some may want to change the agenda for today’s meeting, so you can go ahead and explain what you wanted to do and you can go ahead and start.
And also, our understanding is to focus on the three priorities on the agenda: the first is broadband, the second one is affordability, and the third one is — the third one.
>> I believe the third one is smart cities.
>> CM BORYSLAWSKYJ: Smart cities, thank you.
>> SARAH LEGGIN: All right.
Well, this is Sarah speaking.
I think what we had discussed is we’d provide a little bit of an introduction about each of the panelists and where we work and a little background on our work for our respective companies and then we would love to turn to the first topic of discussion about broadband connectivity, if that works for the group.
So I’ll go ahead and start with an introduction.
I’m Sarah Leggin, and I’m the director of regulatory affairs at CTIA.
So I advocate for the wireless industry before the FCC and other federal agencies on various policy areas with a focus on consumer issues such as promoting wireless accessibility for people with disabilities.
At CTIA we’re really proud to represent the wireless industry, for the wireless providers, to manufacturers, and other innovators and we are really grateful to continually work outside our — alongside our partners like TDI to help empower millions of people with hearing loss through the revolutionary powers of wireless technology.
So before we dive into the discussion of those three main areas that CM describes, I was hoping to just give a little bit of an overview of wireless accessibility.
So wireless services and devices have become really central to consumers’ lives, particularly for people with disabilities since they can customize their wireless devices to help meet their unique and diverse needs.
And the importance and value of wireless services have become even more pronounced during the COVID pandemic.
And the wireless industry has really tried to rise to this challenge while continuing to deploy and enhance wireless services and devices.
So just as a general overview, 4G LTE service is available to over 99% of the population today, and wireless providers continue building new 5G networks across the country and bringing new 4G and 5G devices to the market.
And providers are always looking to expand accessible options there and try to enable consumers to choose from as many accessible options as possible.
Today there are over 1,500 accessible mobile phones that consumers can choose from, made by more than 30 manufacturers around the world,
at least based on the latest report from the Global Accessibility Reporting Initiative, or GARI, database, which I'm sure many of you are familiar with.
In addition to those enhancements, wireless prices in the U.S. have actually declined 45% since 2010.
And that’s all the while consumers are using more voice and text and data services than ever before.
So by deploying and improving accessible wireless services while making those services even more affordable, the wireless industry has really tried to enhance access opportunities for people with disabilities.
And as we’ll talk about more later, through support for wireless, from programs like the FCC’s emergency broadband benefit and the emergency connectivity fund which, again, we’ll talk about a little bit later, wireless innovations and services can really help bridge the digital divide for people with disabilities so that wireless can continue to help keep people with disabilities connected to affordable and accessible wireless services.
So to help make sure that people with disabilities, as well as seniors and veterans and their families and caregivers, can all find the devices and services that meet their needs, I just wanted to highlight that CTIA has a web page called AccessWireless.org.
At that site CTIA shares information about wireless services, handsets and apps that help create new possibilities for all Americans and can help increase opportunities and inclusion for people with disabilities by helping people with a range of disabilities find the right mobile device that meets their unique needs.
So on the AccessWireless.org homepage, visitors can select from categories to easily navigate to the type of information you're looking for, including hearing, vision, mobility, speech, and cognition, as well as resources for seniors and veterans.
And the GARI tool is actually embedded right within the website, and it allows users to search and compare accessible devices and apps.
And the database of industry resources that we also have on the website is a detailed list of the devices and services offered by wireless carriers and our community partners.
And then just one specific page I hoped to highlight: on AccessWireless.org's deaf and hard-of-hearing page, visitors can find information on hearing aid compatible wireless handsets, telecoil and T-coil coupling, closed captioning, video and text communications, visual displays, tips for real-time text or RTT, TTY compatibility, wireless emergency alerts, and more.
And then for individuals who prefer communicating with video using sign language and speech reading, they can navigate the Mobile & Wireless Forum's GARI database to search for phones that support their unique needs.
So to learn more, again, that's AccessWireless.org.
So, thanks so much for giving me extra time in this introduction.
And I really look forward to discussing wireless accessibility and how providers continue to work hard to help keep everyone connected to access wireless services and devices and then also participate in programs to help deliver even more affordable broadband services to all consumers, including those with disabilities.
Thanks.
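For readers curious how the device search Sarah describes works in principle, here is a minimal sketch of filtering phones by accessibility features. The records and feature names are hypothetical placeholders; the real GARI database has its own schema and far more entries.

```python
# Sketch: filter phones by accessibility features, GARI-style.
# The phone records and feature names are hypothetical examples.
phones = [
    {"model": "Phone A", "features": {"hearing_aid_compatible", "rtt", "captioning"}},
    {"model": "Phone B", "features": {"captioning", "screen_reader"}},
    {"model": "Phone C", "features": {"hearing_aid_compatible", "rtt", "telecoil"}},
]

def find_phones(required: set[str]) -> list[str]:
    """Return models that support every required accessibility feature."""
    return [p["model"] for p in phones if required <= p["features"]]

print(find_phones({"hearing_aid_compatible", "rtt"}))  # ['Phone A', 'Phone C']
```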
>> ZACHARY BASTAIN: Thank you, this is Zach — go ahead, excuse me.
>> CM BORYSLAWSKYJ: I just wanted to mention the website is excellent, I wanted to praise Sarah for that.
If you can just post the website link into the chat, that would be great.
Thank you so much, Sarah.
And then up next, Zachary, you can go ahead and start.
>> ZACHARY BASTAIN: Thank you, CM.
Again, this is Zachary Bastain from Verizon, I work on our strategic alliances team as part of our public policy and legal strategy office.
I just wanted to say thank you to Sarah; CTIA does a great job representing our industry and really leads on a lot of accessibility issues, so AccessWireless.org is indeed a great website to visit if you want to learn more about what the industry is doing.
It’s a thrill to speak to TDI, Verizon has had a longstanding relationship with TDI.
We really value you as our partner.
And we love meeting our customers through forums like this so please don’t hesitate to reach out if you have any questions after this meeting, I’d be more than happy to speak with you.
I would also like to point you to Verizon’s specific portal on accessibility which is verizon.com/accessibility.
And that will give you a lot more information on specific service offerings that are out there and also the specific types of accessible customer service that we have built into Verizon and the ways that you can choose to engage with us in a variety of forums just to get exactly what you need out of our service.
We’re really excited today to explore, you know, beyond the overall industry and what we’re doing as an industry to support accessibility, the specific things that Verizon has done to expand access and make sure that our tools are getting to the right place, some of our broad policy priorities that touch on accessibility and we think will drive better outcomes for people with disabilities trying to access the Internet, as well as some of the smart cities, focused partnerships that have us incredibly excited about the future of accessibility.
Somewhere we’re really right now looking for high-level stakeholders within the deaf, deafblind, and hard-of-hearing communities to make sure that we’re building standards for next-generation technology that are as accessible as possible.
And as always, we just value these opportunities first and foremost as a way to connect with you, a way to learn more about your concerns, a way to share information about what’s going on with us.
So it’s a privilege to be on this panel and I thank you so much for your attention today.
>> CM BORYSLAWSKYJ: Thank you, Zach.
And you have a wonderful Verizon commercial.
We love that commercial.
It shows disabled people and it’s just, wow, wow, it’s excellent.
You did an excellent job with that.
Thank you.
All right, who wants to go next?
>> SHELLIE BLAKENEY: That would be me (chuckle).
Thank you, Sarah, CM, and Zach.
By way of introduction, my name is Shellie Blakeney, and I’m a director with T-Mobile’s governmental affairs department based in Washington, D.C.
I advocate on behalf of the company and I track consumer protection-related topics such as accessibility and safety.
Our company, which is based in Bellevue, Washington, outside of Seattle, is committed to a wireless experience that delivers top quality and phenomenal customer service.
We're building and deploying a robust 5G network, and we expect it will fuel product and communications innovations, creating a more inclusive, equitable, and interconnected experience for all.
We’re very pleased to be here with you today and look forward to further discussion.
Thank you.
>> CM BORYSLAWSKYJ: All right.
T-Mobile now includes Sprint Relay, it's all together now; there are a lot of changes going on there, and you're running with them.
I definitely would love to talk about that topic with you a little bit later.
So I guess we’re going to move on to Corian.
>> CORIAN ZACHER: Thank you so much. I'm Corian Zacher, policy counsel at Next Century Cities.
At Next Century Cities we work with local leaders and organizations nationwide to ensure that communities have the tools they need to play a meaningful role in connecting every member of their community with reliable, affordable, high-speed Internet access. I saw that someone shared in the chat that people in Arizona are disconnected and that the broadband data doesn't accurately show how many people lack access; that's been a really core part of our work.
In May we released a broadband mapping report that highlights what local and state governments have done to improve data collection.
That has really been a core part of our work over the last few months.
And many of the cities, counties, towns, and villages that we work with have also reached out to residents directly to learn more about who remains disconnected, not just from infrastructure but also who doesn't have the digital literacy tools or devices they need to use the Internet.
As an increasing number of government and private services have moved online, prioritizing and planning for digital equity and inclusion has become a key strategy for achieving universal broadband. At NCC we work with communities to make sure they are part of that solution and have the resources and funding they need to fill those broadband gaps, so that everyone in the community can use the Internet equitably and has the services they need as technology advances.
I’m so excited to be here today.
I look forward to sharing some insights that we learned from speaking with those who are working at the local level on connectivity initiatives.
>> CM BORYSLAWSKYJ: Excellent.
It’s an excellent topic.
It’s very important for all of us.
There’s a lot of deaf people, I would say, well, 50%, somewhere in that range, that can afford — they can’t afford Internet at all.
So when we’re talking about 150 or $200 per month, they can’t manage that.
And deaf people rely on VRS and they can’t afford these services so now I can’t even make calls.
There’s, I would say, for example, when we’re talking about the speed, 150 megabytes bps on average is what we’re seeing but with the videophone that doesn’t require that 300 or more, and companies will charge you money getting at a 300 or more rate.
When you talk about net neutrality, that’s something we have to look at and that has to be removed, and, of course, they remove that so people can get higher speed Internet so I’d like to talk about that later but I want to talk about this issue, first and foremost, is there anything you want to bring up now that you feel is important to you?
Any current topics that are related to broadband?
I understand Sarah was wanting to discuss some current issues that are going on with broadband.
But I do like the topic that Corian brought up as well in terms of the gaps.
So I just wanted to see what current issues that you want to talk about that relate to that.
What are you seeing?
>> SARAH LEGGIN: This is Sarah.
I’m happy to kick off the discussion about just making sure that we’re focused on getting everybody connected to broadband.
And I think Corian touched on some of those issues as well.
I know that Shellie and Zach and CTIA and the wireless industry are really focused on that, because we know how important getting everyone connected is, and we know how important wireless connectivity in particular is and has been as everything moved to remote, especially given how many accessible options wireless technology offers and the ability it gives people with disabilities to meet their needs.
So just to talk a little bit about what we’re seeing and just kind of what the wireless industry is doing in relation to focusing on getting everybody connected to broadband.
The wireless industry invests in and deploys advanced networks all the time to try to help bring even more connectivity and faster speeds to consumers across the country.
As I noted before, 99% of the country has access to several different 4G providers but we know that the work is never done, so we are always investing in and deploying more networks to help make sure that everybody’s connected.
We’re always investing in increasing speeds as well.
And the average 4G download speed is actually 31 times faster than it was in 2011, moving from 1.3 megabits per second to 41 megabits per second.
So that’s a big increase but we know there’s always work to be done so we’re always trying to improve that for consumers across the country.
In addition to faster speeds, we’re always working on enhancing our data packages so that consumers can have access to enhanced or unlimited data and even more powerful mobile devices to meet increasing demands every year.
These 4G network improvements were a big catalyst for a lot of the significant increases in accessibility features in wireless devices today, and they also laid the groundwork for the boom of the app economy; a lot of those apps are geared toward meeting the needs of people with disabilities.
In addition, 4G networks allowed for features such as high-definition voice and Voice over LTE, which allows for simultaneous data and voice use, as well as higher bandwidth than previous technologies, to enable a better experience.
These increased transmission rates really enabled new ways of working, learning, and communicating over video for consumers who are deaf, hard of hearing, or speech-impaired, as well as for people with mobility limitations, because they enable communications wherever you are, whether you're working remotely or just at home communicating with friends or family. So 4G LTE really inspired a lot of innovation and promoted an astronomical increase in these options and apps that help meet the needs of people with disabilities.
As I mentioned before, more recently providers have been hard at work building new 5G networks across the country and bringing more 5G and 4G devices to the market.
These networks and devices will help revolutionize things like healthcare, transportation, and educational services all of which will enable people with disabilities to open new doors to educational and employment opportunities among many other aspects of a more inclusive world particularly as consumers have moved to a virtual or remote way of life more and more during the pandemic.
And just a few notes about the kind of wireless connectivity during the pandemic, in particular, this was just, obviously a huge, unique challenge that I think we’re continuing to reflect on and learn from and work on.
So just looking back at the sudden change that really happened overnight for consumers, including consumers with disabilities: wireless connectivity became even more crucial, and as social distancing kicked in, consumers leaned more and more on their wireless services, and we saw huge increases in voice and data traffic in different places, as well as a host of new use cases that we didn't see before.
A few stats that I think are really helpful to understand, compared to before the pandemic, consumers’ usage increased throughout the pandemic up to 40% more than it was before.
And wireless providers not only really supported this dramatic increase in usage, but we also continued to strengthen networks delivering median wireless speeds that increased by nearly 50% compared to before the pandemic.
So as you can see by those numbers, wireless networks really tried to rise to the occasion to support this new surge in use and that’s facilitated by a lot of policies that help enable providers to invest and deploy these new networks as well as legislation like the Communications and Video Accessibility Act that many of you are familiar with that help promote advancements in accessibility through flexible and innovation and investment-friendly policies.
So all that said, we know that there is always more work to be done to make sure that we connect everyone and ensure that everyone has enough connectivity.
So to do that, in addition to network and device deployment and innovation, we also, as a wireless industry, have focused a lot on programs to help ensure that consumers remain connected, particularly low-income consumers.
We also helped ensure that over 2.4 million students were able to stay connected during the pandemic through remote learning solutions including things like wireless hotspots.
So I’ll let Shellie and Zach talk a little bit more, if they want, about those particular programs and other efforts of the wireless industry to connect everyone to the mobile wireless broadband.
>> ZACHARY BASTAIN: Sure and this is Zach speaking — go ahead.
>> Thank you so much, thank you. Zachary, please go ahead; Zach, did you have your hand up?
>> ZACHARY BASTAIN: Yes, I’m sorry.
So I just wanted to piggyback and include some points in there because I’m seeing in the Q&A and from some of the questions that have already come in on requests of affordability so I just wanted to touch on that because it does seem relevant to our discussion right now.
First of all, Verizon has a program called Connected Learning, where we did something similar to what Sarah was referencing on helping to connect students during the pandemic: we offered 4G LTE service at a deep discount, at a bulk rate, to school districts.
So whole states could negotiate this rate to keep their students connected at home during the pandemic.
We also launched the Verizon Fios Forward package, which I'm going to drop into the chat right now so you can check it out; this is a home Internet Fios service available at $20 a month if you qualify for Lifeline, with 200 megabits up and down, so a very strong, robust home Internet option available at a low cost.
CM, did you need to say something?
I just saw you signing.
I just wanted to make sure I wasn’t interrupting.
But beyond that, we actually agree that affordability is a big issue, which has led to our support of the emergency broadband benefit and our membership in the Broadband Equity Alliance, a coalition of industry and advocacy groups united to support a more long-term affordability benefit.
The EBB which the FCC, the Federal Communications Commission, administers has already seen a lot of demand and we believe that the FCC has learned quite a lot about the administration of the benefit.
We’ve been a partner in making outreach to make sure that people know that the emergency broadband benefit is out there and how to best expeditiously get the funds to people that need them.
But when that money runs out, and it will, the underlying affordability problem isn't going to go away, so now is the time to tell your member of Congress, tell your senator, that this is the time to create, within the infrastructure package, a more long-term affordability benefit, to allow this type of subsidy so that as many people as possible who need Internet service can get it.
And our position is that this benefit should be as agnostic as possible: whatever Internet is available in your area, whether that's wireless, cable, or wired Internet, we think the benefit should be able to go toward it as long as it meets the FCC's definition of broadband.
And we’ve learned a lot through our conversations with stakeholders within your communities.
For example, the American Association of People with Disabilities pointed out that many people with disabilities have roommates.
So their suggestion was they think the affordability benefit should apply to each person within a household who would qualify for it versus the household as a whole.
And this is really the exact kind of specific advocacy that speaks to those concerns that would help our representatives understand your community’s needs and write legislation that will meet them.
So a lot of great questions about affordability and that’s just one suggestion that we have on a way to approach it but obviously, it’s a very complicated problem.
>> CM BORYSLAWSKYJ: (No sound).
>> SHELLIE BLAKENEY: Hi, CM, if I may, I just want to touch on one of the programs T-Mobile is involved in, an initiative called Project 10Million.
Thank you for allowing me, you know, time to mention it.
Project 10Million is our company's $10.7 billion initiative delivering Internet connectivity to millions of underserved student households at no cost to them.
We’re partnering with various school districts across the country.
And the program offers wireless hotspots along with free high-speed data and access to laptops and tablets at cost.
So I definitely wanted to mention that.
And then also touch on and just mention that T-Mobile is taking part in some of the programs that Zach mentioned, the EBB and the ECF programs.
And I don’t know if this is a good time but would also like the opportunity just to talk a little bit about some of the areas involving emergency communications, where wireless enables consumers to remain connected during times of emergencies.
Specifically, wireless plays a role as America's lifeline, connecting Americans to life-saving services and information when they need it most. During natural disasters and emergencies, mobile phones have helped consumers reach first responders through 911 and access wireless emergency alerts with critical and timely information from public safety officials that keep us safe.
And I just wanted to mention, for instance, text-to-911, which enables millions of consumers to send a text message to a 911 emergency call-taker at centers that accept text-to-911.
The service has undoubtedly saved countless lives since wireless providers made it first available nationwide in 2014, so it’s been around for some years now.
The FCC encourages emergency call centers to begin accepting texts, but whether to do so is actually up to each call center.
Call centers that accept text to 911 vary from state to state, and consumers will receive a bounceback message if their local emergency call center does not support text to 911.
Also, consumers have the option to check services available in their area and the information is maintained on the FCC’s website.
At the end of 2019, approximately 27% of the nation's little over 5,000 emergency call centers, also known as PSAPs, could receive text messages, and that's a very impressive number that has kept growing, getting closer to 50%.
I also want to talk a little bit about how one would go about sending a text to 911. To send one, you would first open your phone's text messaging app; at the top of the screen, where you type a contact's name or phone number, you would simply enter the numbers 911.
You would type a message that describes the emergency, whether you need medical assistance, fire assistance, or police.
You would also want to include your location, city and state, if at all possible.
And if you're able to do so, you would provide information that can help the emergency responders find you, such as what room you're in or what your surroundings look like.
If text-to-911 is not available in your area, you will receive a message back confirming that your message was not delivered; we refer to that as the bounceback messaging I mentioned before.
And if that happens, we would encourage the end user to go ahead and dial 911.
And also, if I may, I’d like to just pivot and talk a little bit about the text to 988 initiatives, a lot of discussions are underway there.
95% of wireless consumers throughout the country can connect to the national suicide prevention Lifeline by dialing the numbers 988 on their mobile wireless device for help with mental health or suicidal inclinations and resources, they need to prevent a tragic outcome, wireless providers made this important tool available a full year ahead of the FCC’s 2022 deadline for all voice-calling services.
At this time, you may have heard this mentioned in other previous panels at this conference, but there’s a rule-making proceeding that’s currently underway at the FCC exploring potential text to 988 models, FCC docket 18336.
CTIA and industry partners are participating in these discussions and we’re all looking forward to collaborating with the FCC and the accessibility community and engaging in further conversations on this very important topic.
One other area I’d just like to touch on in the way of emergency communications is wireless emergency alerts also known as WEA.
WEA is an alerting system designed to send alerts to mobile devices to enhance public safety.
WEAs are sent to mobile devices by an authorized local, state, or federal public safety official to alert individuals to an emergency in their immediate vicinity.
Over 95% of consumers in this country are served by a provider who supports the wireless emergency alert program.
All three nationwide providers support the service.
Information about the WEA program is conveniently located on all of our websites and in some cases there may be a special WEA symbol or icon on the device packaging so that a consumer can determine whether a particular device supports wireless emergency alerts.
Thank you.
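Shellie's text-to-911 steps amount to a simple checklist, sketched below in Python. The send_sms function is a hypothetical stand-in, since real delivery happens through the phone's native messaging app; only the message structure (emergency type, location, surroundings) and the bounceback fallback follow her description.

```python
# Sketch: compose a text-to-911 message following the steps described above.
# send_sms is a hypothetical stand-in for the phone's messaging app.
def compose_911_text(emergency: str, city_state: str, surroundings: str = "") -> str:
    parts = [f"Emergency: {emergency}", f"Location: {city_state}"]
    if surroundings:  # e.g. what room you're in, what the area looks like
        parts.append(f"Surroundings: {surroundings}")
    return ". ".join(parts)

def send_sms(number: str, body: str) -> bool:
    """Hypothetical placeholder; returns False on a bounceback message."""
    raise NotImplementedError

message = compose_911_text("fire", "Tulsa, OK", "second floor, northeast corner")
# If send_sms("911", message) comes back as a bounceback, text-to-911 is not
# supported in your area: fall back to dialing 911 directly.
```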
>> CM BORYSLAWSKYJ: Thank you.
Thank you, Shellie.
Shellie, can you actually post that information with the links in the chat?
Do you mind just typing that out?
>> SHELLIE BLAKENEY: Certainly.
>> CM BORYSLAWSKYJ: Great.
Thank you so much.
All right, Corian, is there anything else you wanted to add?
>> CORIAN ZACHER: Sure, I’m happy to speak to some things that we’ve heard from the local officials that we work with in terms of challenges to connecting everyone with broadband.
I see a lot of questions in the Q&A about affordability, and another related problem we hear about is that the current minimum broadband speed just isn't sufficient to meet ongoing household needs, particularly when several people are working and learning from home.
And as was mentioned earlier, there are a lot of people who have roommates, while the emergency broadband benefit only applies once per household.
And speeds have become a significant barrier, especially since they are generally not symmetrical: with the upload speed so much slower than the download speed, accessible services aren't always relayed as quickly as they could be. So as we're thinking about building infrastructure, we really need to think about access and adoption, and how people are actually going to be able to use the networks being built; building high-speed, reliable, and easily upgraded networks is really important to ensure that in the long run people are able to use accessible services.
And also that the services are affordable.
So whenever we think about affordability, I saw that someone in the Q&A asked about people who can’t afford $20 a month.
We hear about several different kinds of affordability when we’re meeting with local officials.
So in some areas where there’s not a lot of competition, prices are generally high, and the leaders there say that in general, prices could be lower but that there are some members of the community that will never be able to pay the full price, and low-income programs can help bridge that divides by making services more affordable.
We’ve also seen communities step in and build, like, mesh networks and other wireless networks that are available for free to community members.
There are some challenges on the speed side of those.
So they’re really not a good long-term solution, but in the meantime, they can help bridge the divide for people who can’t afford broadband service at all.
And we know that broadband is especially important for low-income households, especially as the pandemic has moved a lot of government services and information online, and people are using the Internet for job applications and educational tools.
So being able to use those services matters most to the very people who can least afford them.
And with the emergency broadband benefit, we’ve also seen that some residents don’t have the information that they need to enroll in the program.
So at NCC, we’ve actually put together some resources that communities have used to reach out to people.
Some have set up phone hotlines or have actual people in their community, educators and others, who have learned about the program and who are on the ground explaining to people how to enroll and really helping them sign up.
So we think these are really important steps that should be continued in the future, if there is a permanent broadband benefit, and we do hope that there is some sort of long-term solution to addressing affordability, because Lifeline is just not enough for people to afford the broadband subscription that they need in the long run.
As I mentioned earlier, we've been concerned about mapping at NCC, and in order to expand networks in a way that's equitable and ensures everyone has service, we need to know who is served and who isn't.
There's another side of this affordability problem that we haven't talked about, which is that some people live close to where a network is but are quoted thousands to tens of thousands of dollars to actually connect their specific home to the network. That's a huge challenge: these are people who can't afford even the baseline service they need, and being asked to pay tens of thousands of dollars is unrealistic for a lot of them.
So in the long run, we need to think about what sort of programs we could have at the state or federal level that could support that sort of deployment.
And we’re seeing communities sort of being able to step in and bridge some of those gaps, but there’s still a lot of work that needs to be done.
And the state and the federal government can really provide a lot of support that enables communities to help those residents get online.
>> CM BORYSLAWSKYJ: Perfect.
Thank you so much.
I absolutely agree.
And it really depends on the location and the region that you’re in.
I was actually just reading through the chat, and I just need to remind several of you to please post your questions in the chat. And then, up next: yes, some people are not qualified for Lifeline, so that is one of the biggest problems.
Another problem is affordability of the mobile device itself: they might have access to wireless broadband, but they can't afford the phone and the monthly payments that come with it.
So there are a lot of pros and cons, but I think we ought to think about the discrepancies that come with that, because if we provide free equipment, videophones for example, many deaf people cannot afford what comes with it, you know.
So we can’t meet their needs if they can’t afford it.
And then we have to kind of bring it up to the Internet providers.
Maybe a reduced-price package.
I mean, that’s really the big issue today.
So anyway.
The next question, I think we’re open for questions, some Q&A.
If I can look at that and I can read through the chat.
Can you guys see the Q&A?
>> SARAH LEGGIN: This is Sarah.
Yes, I can see the Q&A.
And as you’re reading that, I just wanted to jump in and note that in addition to Corian and NCC’s efforts to help make sure that consumers have all the information they need about how to use the emergency broadband benefit, CTIA also has a page that provides additional information on that program as well.
And I believe we’ve just posted it into the chat in case you want to check it out.
And we also work through our state and local outreach programs with groups that provide culturally relevant and non-English-language coverage as well.
So please check those out as well.
Because, you know, the goal here is to try to get everybody connected and to help make this program as beneficial to everybody as possible.
You know, I think, as was discussed just now, there is always work to be done to continue to improve these programs and get them to more consumers every day.
But we are seeing some progress: the EBB has enrolled over 4 million consumers now, and we know there's still work to be done, but that is a good step in the right direction, at least.
And just as an overview for people who might not be familiar with it, the EBB provides a discount of up to $50 a month toward broadband service for eligible households, and up to $75 a month for households on qualifying Tribal lands. Eligible households can also receive a one-time discount of up to $100 to purchase a laptop, desktop, or tablet. More information on the details is available on our CTIA page on the EBB as well.
And, again, I think there’s always work to be done to improve these programs.
CTIA supports the commission’s efforts to continue to do that.
But one thing that is really positive about these programs is that the commission recognized the really profound role of wireless services and devices in meeting consumers' connectivity needs throughout the pandemic, and also explicitly recognized the need for those services to be accessible to people with disabilities.
And the programs help ensure that future wireless innovations and services can be used to help bridge the digital divide for people with disabilities.
So we really support the commission's recognition of wireless accessibility and the need for those services and devices to be part of these programs to help meet the needs of people with disabilities.
Thanks.
>> CM BORYSLAWSKYJ: All right.
So the next question, do you see Mark Cedar's question?
Can everyone see that?
Has everyone seen Mark Cedar’s question?
Yep.
Have you all had a chance to read it?
All right, so I’ll go ahead and restate it.
So we need over 400 Kbps for the best opportunity for people to use the videophone, but typically, if you have under 200 Kbps, it's not going to work, just like Jack said, so I'm just wondering how we can resolve this.
This is a huge question that everyone's asking: how can we increase speeds to meet the capability people need to connect to the Internet?
So I wanted to pose to anybody on the panel if anyone is willing to answer.
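Taking the figures in the question at face value, here is a minimal sketch of that rule of thumb. Treating the unit as Kbps is an assumption (the live captions garbled it), and marking the band between 200 and 400 as "marginal" is likewise illustrative.

    # A minimal sketch of the rule of thumb in the question above:
    # over ~400 the videophone works well, under ~200 it doesn't.
    # Kbps units and the "marginal" middle band are assumptions.
    def videophone_quality(speed_kbps):
        if speed_kbps >= 400:
            return "good"
        if speed_kbps < 200:
            return "unusable"
        return "marginal"

    for speed in (500, 300, 150):
        print(speed, "Kbps ->", videophone_quality(speed))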
>> CORIAN ZACHER: I’m happy to speak to some things that communities have done.
So some of the communities that we work with have offered gigabit speeds for a long time through municipal fiber-to-the-home networks. Those aren't a perfect solution for every community, but they invest in technology that enables long-term high-speed communications and can be upgraded over time to support higher speeds; some of the communities that have been offering gigabit service for the last ten years are now offering 10-gigabit services, and with the Emergency Broadband Benefit, we were really fortunate that they were included in that program.
So including municipal networks in long-term subsidy programs, while it's not a solution for everyone, can really help people living in communities that have municipal networks get high-speed services at an affordable price.
And in terms of thinking about this more broadly in a policy context, supporting the infrastructure that makes high-speed connections available is a really important starting point for locations that just don’t have high-speed service available.
And there are so many areas where services just are not fast enough so even if there is an emergency broadband benefit provider, there might not be the underlying infrastructure to provide faster speeds.
So those are just a couple of things that we’ve seen from communities that we work with.
>> CM BORYSLAWSKYJ: I’m going to ask one more question because we’re getting close to time.
And just give me a second so I can look at my notes.
Very forgetful.
So sometimes I have to rely on my notes.
OK, OK.
I got it.
So we’re talking about 3G networks.
They’re going to sunset those by 2022.
Are there any issues that are going to happen with the phone networks when 3G stops and is sunset?
Have you heard any information about that?
Any issues that you foresee coming up?
>> SARAH LEGGIN: This is Sarah.
I can start the conversation on that topic.
And thank you so much for that question.
I know that there’s a lot of attention and, you know, questions being raised about what the transition means.
So, as I said before, the vast majority of consumers already have transitioned to 4G LTE but we know that that’s not everybody.
And, again, 99% of Americans have access to three or more 4G networks in their area, but we still know that there are Americans across the country who are using 2G or maybe 3G devices and networks.
But with 5G service expanding rapidly, wireless providers are planning on retiring their 2G and 3G networks to help free up more spectrum that will fuel the next generation of service.
That's because 4G and 5G networks and devices bring faster speeds, better coverage, and enhanced security options to consumers.
So to help achieve the benefits of those more advanced networks, providers need to kind of repurpose their spectrum resources to allow newer, more efficient generations of wireless networks to be deployed.
So today that work is happening, as providers transition the spectrum that used to carry 2G and 3G networks over to 4G and 5G networks.
So we’ve been through these transitions before with technologies, things like the transition from analog to digital, and providers are very conscious of ensuring that there really isn’t a disruption to consumers as these transitions happen.
So, again, we’ve been through this before, and there are resources available from your wireless providers’ website to learn what exactly your provider’s plans are for your area.
So I would encourage you to look: CTIA's website has resources on this, and then individual providers' websites do as well.
And, again, the important thing to take away is that a lot of these transitions are really focused on bringing faster speeds and powering more advanced applications that will benefit Americans across the country as the transition continues and more and more Americans get access to these next-generation networks.
>> ZACHARY BASTAIN: Just to jump in real quick, Sarah: I wanted to touch on something that I saw Debbie Hagner ask in the chat, which is whether, if you have 5G, you will need to get a different kind of phone.
Only a few devices at this point actually have what we call 5G ultra-wideband antennas in them.
So even if your area doesn't currently have that robust 5G-level service, the phone that you have still has the antenna for the 4G LTE service you were getting prior, so you don't need to worry that your phone will stop working just because 5G is rolling out. Great question, Debbie.
>> CM BORYSLAWSKYJ: Does anyone have any last comments?
I’ll turn it over to the panel.
>> CORIAN ZACHER: I saw the question in the chat about the top three things that cities could do to make broadband more affordable and accessible. We didn't touch on it much, but a lot of cities have engaged in digital equity and inclusion planning, which involves reaching out to the community and asking them what they need, because affordability is really such a local project; understanding what local residents actually need is the first step to understanding how to solve affordability.
That also means asking people what they need in terms of service: hearing from people who are using accessible technologies, and asking what they need to be able to use those technologies ubiquitously and with other people in the home, is really important. So I would say the first step is to reach out to people to really learn what they need, and then to take steps toward solving those problems.
>> CM BORYSLAWSKYJ: Thank you.
Thank you, Corian.
Well, I think it’s time for us to close out the session.
We have a couple of minutes left, so if anyone wants to get any more comments in before we close out, now is the time.
>> ZACHARY BASTAIN: Thank you, CM, great job moderating this panel.
I know we didn’t really get so much to the smart cities part of the discussion.
I just wanted to let the group know that I have been the co-chair of a standards group at the Consumer Technology Association for the last couple of years, trying to put together navigation standards, first for people who are blind or low-vision; last year we did a standard for people who have cognitive or developmental disabilities.
And right now we're trying to put together a group specifically designed for people who are deaf, deafblind, or hard of hearing, to create navigation standards for that population.
>> Agree, the same thing, that needs to happen, yep.
>> ZACHARY BASTAIN: Yes.
So we’ve engaged with, for example, the National Association of the Deaf, deafblind citizens of action, yeah, so thank you again for the opportunity but we’re really excited for some of the navigation opportunities in smart cities.
>> CM BORYSLAWSKYJ: Absolutely.
Thank you.
Thank you for everyone being here.
Really grateful that you’re here and I hope you enjoy the rest of the conference.
Take care.
>> Thanks, everyone.
>> CM BORYSLAWSKYJ: All right, bye.
Closing Ceremony
Christine Sun Kim, Ian Sanborn, Mervin Primeaux, and John Kinstler
Transcript: Closing Ceremony