Proceedings
January 16 – 18, 2019
University of Washington, Seattle, WA
This publication shares the proceedings of the AccessCyberlearning 2.0 Capacity Building Institute (CBI), which was held at the University of Washington in Seattle, January 16 – 18, 2019. The content may be useful for people who seek to make digital learning opportunities welcoming to, accessible to, and usable by everyone, including individuals with disabilities.
The AccessCyberlearning 2.0 Synthesis and Design Workshop is funded by the National Science Foundation’s (NSF) Cyberlearning and Future Learning Technologies program of the Division of Information and Intelligent Systems (#1824450). AccessCyberlearning 2.0 aims to conduct exploratory research to inform the design of the next generation of digital learning environments for science, technology, engineering, and mathematics (STEM) content.
Led by the DO-IT (Disabilities, Opportunities, Internetworking, and Technology) Center—which has decades of expertise in designing welcoming, accessible, and usable websites, documents, videos, and digital learning activities—the AccessCyberlearning 2.0 Synthesis and Design Workshop sought to answer four research questions that emerged from its current AccessCyberlearning project, which is also funded by NSF (#1550477):
To address these questions, the 2.5-day synthesis and design workshop and follow-up engagement will
In this CBI
The CBI included presentations, panel discussions, and group discussions where CBI participants shared their diverse perspectives and expertise. The agenda for the CBI and summaries of the presentations are provided on the following pages.
8:00 – 9:00 am
Breakfast and Networking
9:00 – 10:30 am
Welcome, Introductions, Overview AccessCyberlearning 2.0
Sheryl Burgstahler, University of Washington, PI
Research Questions, Project Tasks and Products: Overview of approaches to access for individuals with disabilities, challenges faced by students with disabilities, and teaching strategies that can make online learning accessible to students with disabilities. (addressing research questions 1 and 2)
10:30 – 10:45 am
Break
10:45 – 12:00 pm
Accessible Technology
Terrill Thompson, University of Washington
Current strategies for making websites, videos, documents, and digital tools accessible to and usable by individuals with disabilities (research questions 1 and 4)
12:00 – 1:00 pm
Working Lunch
Lunch and discussion: How do current digital learning research and practices contribute to the exclusion and marginalization of individuals with disabilities? (research question 2 and tasks 1 and 2)
1:00 – 2:00 pm
Computing Research Practices
Richard Ladner, University of Washington
Computing research design practices (including recruiting and analysis) that exclude and marginalize individuals with disabilities; inclusive research design approaches; recommendations for the future of digital learning research (research questions 2–4)
2:00 – 2:15 pm
Break
2:15 – 3:30 pm
Panel: Perspectives of Students and Instructors with Disabilities
Participants share their experiences and recommendations regarding engagement of individuals with disabilities in cyberlearning (research question 1)
3:30 – 4:00 pm
Report Out
Report out from lunch discussions
4:00 – 4:45 pm
Preview of Tomorrow’s Topics and Work Groups, Complete Daily Feedback Form, and Pose for Group Picture
6:00 – 7:30 pm
Working Dinner
Buffet dinner and continued discussion regarding research questions 1 and 2.
8:00 – 9:00 am
Breakfast and Networking
9:00 – 9:30 am
Review and Overview
Share ideas generated last night. Introduction to today’s agenda
9:30 – 12:00 pm
Panel
Panel of leaders share research and practice issues and findings at the intersection of accessibility and cyberlearning. Large group Q&A and discussion
Aaron Kline, Stanford University
Prasun Dewan, University of North Carolina Chapel Hill
Shiri Azenkot, Cornell Tech
Sofia Tancredi, University of California, Berkeley
Mike Jones, Brigham Young University
Lorna Quandt, Gallaudet University
Ray Rose, Online Learning and Accessibility Evangelist
12:00 – 1:00 pm
Working Lunch
Lunch and discussion: What specific actions can digital learning researchers, funding agencies, educators, and other stakeholders take to systematically address issues with respect to disabilities? (research question 4)
1:00 – 1:30 pm
Report Out
Report out from lunch discussions
1:30 – 4:30 pm
Developing Products
Review of project products and organization of groups and tasks. Work within small groups that each focus on a specific contribution to project products (e.g., draft accessibility guidelines for cyberlearning researchers, create online resources, develop a section of the project white paper focusing on one research question)
4:30 – 5:00 pm
Report Out from Small Groups, Preview of Tomorrow’s Topics and Complete Daily Feedback Form
8:00 – 9:00 am
Breakfast and Networking
9:00 – 9:15 am
Review and Overview
Share ideas generated last night and introduction to today’s agenda
9:15 – 11:30 am
Developing Products continued
Continue small group work from yesterday
11:30 – 12:00 pm
Wrap up, Discussion of What Remains to Be Done, Community of Practice, Evaluation
Presented by Sheryl Burgstahler, University of Washington
I taught my first online class in 1995, before the internet was widely used. This was a class on adaptive technology for people with disabilities. I taught the class with professor Norm Coombs, who is blind. We took steps to showcase how it is possible to design an online course that’s accessible to any potential student, including those with disabilities. Although the digital tools are different and more complex, I strive to reach this goal in the online classes I teach today.
According to the US Department of Justice and the Office of Civil Rights of the U.S. Department of Education, “accessible” means “a person with a disability is afforded the opportunity to acquire the same information, engage in the same interactions, and enjoy the same services as a person without a disability in an equally effective and equally integrated manner.”
There are two approaches for making our campuses accessible: accommodations and universal design (UD). Accommodations are reactive, adapting a product or environment to make it more accessible to an individual who finds it inaccessible (e.g., captioning a video when a student with a hearing impairment requests it). UD is a proactive approach that makes all aspects of a product or environment as accessible as possible as it is being designed. As defined by North Carolina State University’s Center for Universal Design, UD is “the design of products and environments to be usable by all people, to the greatest extent possible, without the need for adaptation or specialized design.” A building entrance that is technically accessible might have a separate ramp for people who use wheelchairs or cannot use the stairs, while an entrance that is universally designed might have one wide, gently sloping entrance used by everyone entering the building. Universally designed products are accessible, usable, and inclusive. Universally designed technology builds in accessibility features, is flexible, and is compatible with assistive technology.
Ability exists on a continuum, where all individuals are more or less able to see, hear, walk, read print, communicate verbally, tune out distractions, learn, or manage their health. Regardless of where each of a student’s abilities falls on this continuum, and regardless of whether or not they disclose a disability or request accommodations, we want to ensure that they have access to the classes we teach and the resources we share.
Postsecondary efforts to include students with disabilities typically focus on accommodations. At the UW, we remediate over 30,000 PDFs and caption over 60 hours of video each quarter as accommodations for students. If faculty designed their classes with universal design in mind, these numbers would shrink: documents would already be in accessible formats, videos would be captioned for the benefit of everyone, and far fewer individual accommodations would be needed. More than just people with disabilities are helped by UD; sloped entrances benefit people moving carts, and captions help those learning English or viewing in noisy environments.
UD values diversity, equity, and inclusion and can be implemented incrementally. Universal design of instruction (UDI) focuses on benefits to all students, promotes good teaching practice, does not lower academic standards, and minimizes the need for accommodations. UDI can be applied to all aspects of instruction, including class climate, interactions, physical environments and products, delivery methods, information resources and technology, feedback, and assessment. For specific tips on designing an accessible course, follow the 20 Tips for Teaching an Accessible Online Course. Other resources can be found in DO-IT’s Center for Universal Design in Education (CUDE).
Presented by Terrill Thompson, University of Washington
How do we overcome large barriers? We innovate, and we refine our innovations over time until they’re better and more inclusive. Throughout history, innovation has often initially excluded groups of people. For example, the Gutenberg printing press made mass printing possible in 1452, but print remained inaccessible to people unable to see for nearly four centuries (Braille was invented in 1829, and the first electronic screen reader was introduced by IBM in 1986). Similarly, television appeared in the 1920s, but the first captions for people who are deaf or hard of hearing didn’t appear until 1972, and audio description for people who are blind followed in 1988.
In contrast, HTML included accessibility features from the beginning (e.g., alt text for images, hierarchical heading tags for document structure), demonstrating that it is possible to innovate without erecting barriers.
When we’re creating digital content such as web pages or online documents, we may envision our typical user as an able-bodied person sitting at a desktop computer. In reality, people use a wide variety of technologies to access the web, including assistive technologies and mobile devices, and everyone has a unique combination of abilities when it comes to seeing, hearing, or using a mouse or keyboard. Are digital learning environments always accessible to and usable by students and instructors using assistive technology? To ensure our digital resources are accessible, designers, developers, and content authors must understand that users are technologically diverse and must familiarize themselves with a few simple accessibility standards, tools, and techniques. One simple test is to try navigating your own online resources (e.g., websites, software, assessment tools) without a mouse (nomouse.org). HTML websites, rich web applications, Microsoft Office documents, and Adobe PDF files can all be accessible to all users, but only if they are designed and created with accessibility in mind.
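Some of these checks can even be automated. The TypeScript sketch below is a minimal illustration, not a complete audit: it flags elements wired up with inline onclick handlers that keyboard users cannot reach, one narrow class of problem that the no-mouse test uncovers.

```typescript
// Minimal sketch: flag elements that have inline onclick handlers but are
// neither native interactive elements nor given a tabindex, so keyboard
// users cannot reach them. This catches only one class of problem; it does
// not replace actually tabbing through the page without a mouse.
function flagMouseOnlyWidgets(root: Document = document): HTMLElement[] {
  const nativelyFocusable = new Set(["A", "BUTTON", "INPUT", "SELECT", "TEXTAREA"]);
  const flagged: HTMLElement[] = [];
  root.querySelectorAll<HTMLElement>("[onclick]").forEach((el) => {
    const reachable = nativelyFocusable.has(el.tagName) || el.hasAttribute("tabindex");
    if (!reachable) flagged.push(el); // clickable with a mouse, unreachable by keyboard
  });
  return flagged;
}

console.log("Mouse-only widgets found:", flagMouseOnlyWidgets().length);
```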
Most students now interact with a learning management system (LMS) to access course materials, engage in class discussions, turn in assignments, complete assessments, and so on. Most LMSs have reasonably good accessibility. However, each educator must keep accessibility in mind as they select plug-ins and create or upload course content. Many students and professionals interact with web conferencing, video, and collaboration tools as well; these tools also need to be made accessible to and usable by all.
The most common guidelines for designing accessible technology are the Web Content Accessibility Guidelines (WCAG), published by the World Wide Web Consortium (W3C). WCAG 2.0 (2008) is organized into four main principles: content must be perceivable, operable, understandable, and robust. Each of these principles is defined by more specific guidelines, and those are further defined by specific success criteria, each assigned Level A, AA, or AAA, in descending order of priority. WCAG 2.0 Level AA is widely identified in legal settlements, resolutions, and policies as the expected level of accessibility for websites.
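As a concrete illustration of one success criterion, WCAG 2.0 Level AA requires a contrast ratio of at least 4.5:1 between normal-size text and its background (3:1 for large text). The following TypeScript sketch implements the contrast-ratio formula from the WCAG 2.0 definitions; the sample colors are illustrative.

```typescript
// Convert an 8-bit sRGB channel to linear light, per the WCAG 2.0 definition.
function linearize(channel: number): number {
  const c = channel / 255;
  return c <= 0.03928 ? c / 12.92 : Math.pow((c + 0.055) / 1.055, 2.4);
}

// Relative luminance of an sRGB color.
function luminance(r: number, g: number, b: number): number {
  return 0.2126 * linearize(r) + 0.7152 * linearize(g) + 0.0722 * linearize(b);
}

// Contrast ratio: (L1 + 0.05) / (L2 + 0.05), where L1 is the lighter luminance.
function contrastRatio(fg: [number, number, number], bg: [number, number, number]): number {
  const l1 = luminance(...fg);
  const l2 = luminance(...bg);
  const [lighter, darker] = l1 >= l2 ? [l1, l2] : [l2, l1];
  return (lighter + 0.05) / (darker + 0.05);
}

// Example: dark gray text (#595959) on white yields roughly 7:1, passing Level AA.
console.log(contrastRatio([89, 89, 89], [255, 255, 255]).toFixed(2));
```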
If websites include rich, dynamic content (as opposed to static materials), ensuring their accessibility will likely depend on Accessible Rich Internet Applications (ARIA), a markup language that supplements HTML with attributes that communicate the roles, states, and properties of user interface elements to assistive technologies. ARIA answers questions like “What is this?”, “How do I use it?”, “Is it on/selected/expanded/collapsed?”, and “What just happened?” The W3C maintains an extensive set of design patterns for common web widgets in its WAI-ARIA Authoring Practices document. Developers creating web applications that include any of the components defined by the W3C should implement the recommended design patterns so that users encounter consistent, reliable user interfaces. Otherwise, users (especially keyboard users and assistive technology users) have to learn an entirely new interface every time they visit a new website.
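As an illustration, here is a minimal TypeScript sketch of the disclosure (show/hide) pattern described in the WAI-ARIA Authoring Practices; the element IDs are hypothetical, and a production widget would handle more states and edge cases.

```typescript
// Hypothetical elements: a native <button id="toggle"> and a <div id="details">.
const button = document.getElementById("toggle") as HTMLButtonElement;
const region = document.getElementById("details") as HTMLElement;

// Tie the button to the region it controls and expose the initial state
// to assistive technologies.
button.setAttribute("aria-controls", "details");
button.setAttribute("aria-expanded", "false");
region.hidden = true;

// A native <button> is keyboard-operable by default (Enter/Space), so one
// click handler covers mouse and keyboard users alike.
button.addEventListener("click", () => {
  const expanded = button.getAttribute("aria-expanded") === "true";
  button.setAttribute("aria-expanded", String(!expanded));
  region.hidden = expanded;
});
```

Using a native button rather than a clickable div is itself part of the pattern: it supplies the role, focusability, and keyboard behavior for free, so ARIA only needs to convey the expanded/collapsed state.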
For more information about IT accessibility, consult the following resources:
Presented by Richard Ladner, University of Washington
What is Cyberlearning? According to the Center for Innovative Research in Cyberlearning (CIRCL), cyberlearning “applies scientific insight about how people learn, leverages emerging technologies, designs transformative learning activities, engages teachers and other practitioners, measures deeper learning outcomes, and emphasizes continuous improvement.” In looking over this description, I found it needed something more, so I’ve added another focus: it supports cyberlearning for all. Cyberlearning is about people, particularly students, and they come with a wide variety of abilities.
I am a professor emeritus at the University of Washington, and I’ve been on the faculty since 1971. I have seen the growth of computer science over the past 48+ years. For the past 15 years, my focus has been on accessibility research and two collections of grants: AccessComputing and AccessCSforAll. My accessibility research in learning has focused on K-12 and college levels, as can be seen in the following projects: Tactile Graphics, ASL-STEM Forum, ClassInFocus, BraillePlay, Blocks4All, and Accessible Computer Science Principles.
There are a lot of students with disabilities. The Individuals with Disabilities Education Act (IDEA) covers about 13% of K-12 students nationally. These students have Individualized Education Programs (IEPs) that establish their educational goals and identify the accommodations they need to reach those goals. In addition to IDEA, about 2% of students with disabilities are covered by Section 504 of the Rehabilitation Act. These students have the same educational goals as mainstream students but require accommodations to ensure access to the curriculum. In total, about 15% of K-12 students in the US have identified disabilities. In Washington State the percentages are higher, with 13.8% IDEA students and 3.2% Section 504 students, adding up to about 17% of the 1.1 million K-12 students in Washington State public schools. In higher education, 11% of undergraduate students and 5.3% of graduate students have disabilities.
The biggest barriers to education are teachers’ and administrators’ attitudes. Students with disabilities were historically excluded, though more recently they have been included through accommodations and the application of Universal Design for Learning (UDL). Nonetheless, the IEP process can lead educators to set a low bar for the educational goals of their students with disabilities. Attitudinal barriers can come from low expectations and a focus on compliance rather than on welcoming students as part of a diverse student body. Technology is often a barrier because almost all new educational technology, including most cyberlearning tools, is inaccessible to many students with disabilities from the start. Cyberlearning should be for all students, regardless of disability.
There are multiple design concepts in human-computer interaction to consider when designing a cyberlearning tool. You can design for accessibility using universal design and ability-based design. We also use user-engaged design, which includes three perspectives: user-centered design, participatory design, and design for user empowerment.
The design cycle has four phases: analysis of the problem to be solved, design of a solution, prototyping, and testing. This cycle is repeated until the problem is solved satisfactorily, as judged by the testing. Designs created with the engagement of the intended users are more likely to be adopted. User-centered design involves users only in the testing phase, participatory design involves them in both the design and testing phases, and design for user empowerment involves them in every phase of the design cycle.
User empowerment requires that users have the self-determination and the technical education needed to participate fully in the design cycle. Self-determination means that the person with a disability has the power to make change, in this case to solve their own accessibility problem. Education means they have the wherewithal to design, build, and test their solution. Such individuals are not waiting for someone else to solve their accessibility problem; they can do it themselves with the help of allies.
Demographics, equity, and quality all need to be considered when thinking about accessibility. Demographics refers to the large segment of the population that has disabilities. Equity refers to the concept that this large segment should be included and have power. Quality refers to the idea that better solutions to problems often come from diverse approaches. Disability is one facet of diversity. My closing thought, stated succinctly: research fields need more people with disabilities, because their expertise and perspectives spark innovation.
Presented by Aaron Kline, Stanford University
Many students with autism have difficulty reading people’s facial expressions and gauging emotions. The Autism Glass Project’s technology, which works similarly to Google Glass and is connected to a smartphone, reads people’s emotions and feeds that information back to the wearer by displaying the expression as a word or emoji. We are also testing different audio feedback options. When faces are viewed from different directions in larger groups, it is difficult for the technology to read facial cues.
The technology also records interactions so that the wearer can later review them, with parents or others, and read the facial expressions again. The technology is aimed at increasing facial engagement among people with autism. It gives people with autism the tools and empowerment to learn and grow in social situations. There are also options for children to play games around facial cues and expressions, learning in a game setting.
We ran a study in which students wore our technology in social settings. Many participants became more likely to look at people’s faces and engage with facial expressions. Students became more comfortable with the headset after wearing it for a length of time and weren’t overwhelmed by visual or audio feedback. They expressed a desire for more gamification, feedback and rewards, and personalization. More advanced students also wanted levels and more ways to challenge themselves with the technology. We have now moved on to randomized controlled designs for future studies.
Our project team currently does not include any people with autism. In the future, we must include people with autism in the design, development, and evaluation. As seen in other projects, involving students in the design of their technology makes them more excited to wear it. Furthermore, we are exploring other uses for this technology, including reading people’s levels of interest in meetings or describing the content of pictures or real-world scenes to someone who is blind.
Presented by Prasun Dewan, University of North Carolina Chapel Hill
Accessible cyberlearning should address not only delivery of knowledge but also creation of learning-inducing artifacts. Our research involves systems that (a) allow both textual and visual user interfaces to create artifacts, automatically translating between the two; (b) use machine intelligence to detect task difficulties and communicate this inference to those who can help with the task; and (c) use machine intelligence to automatically recommend solutions to difficulties. Such systems have the potential to increase accessibility for workers and/or helpers with visual impairments, limited motor skills, and autism. Investigating this potential requires enough data both to train the machine-intelligence algorithms and to evaluate their impact on task creation and learning.
Our work addresses difficulty resolution and spans two projects:
Difficulty Detection in Programming: We are building a system that uses machine learning to automatically determine if programmers are facing difficulty, conveys this information to interested potential helpers, and provides an environment for offering help with the problem (see the sketch following these project descriptions).
Difficulty Amelioration in Data Science: Data science involves connecting programs into workflows. Traditionally, this connection has been done using command languages, but because these are considered difficult to learn and use, some modern systems offer visual alternatives. This project is using machine learning to automatically recommend workflow steps to users in difficulty.
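To make the difficulty-detection idea concrete, the hypothetical TypeScript sketch below scores interaction-log features with a logistic model. The features, weights, and threshold are invented for illustration and are not the project’s actual system; in practice such weights would be learned from labeled sessions.

```typescript
// Hypothetical interaction-log features, each normalized to roughly [0, 1].
interface SessionFeatures {
  editRate: number;       // edits per minute (low rates may suggest being stuck)
  failedCompiles: number; // recent compile/test failures
  idleTime: number;       // fraction of time with no activity
  undoRatio: number;      // undos relative to total edits
}

// Illustrative weights; a trained model would learn these from data.
const WEIGHTS: SessionFeatures = { editRate: -1.5, failedCompiles: 2.0, idleTime: 1.8, undoRatio: 1.2 };
const BIAS = -1.0;

function sigmoid(x: number): number {
  return 1 / (1 + Math.exp(-x));
}

// Logistic score: a probability-like estimate that the worker is in difficulty.
function difficultyScore(f: SessionFeatures): number {
  const z =
    BIAS +
    WEIGHTS.editRate * f.editRate +
    WEIGHTS.failedCompiles * f.failedCompiles +
    WEIGHTS.idleTime * f.idleTime +
    WEIGHTS.undoRatio * f.undoRatio;
  return sigmoid(z);
}

// If the score crosses a threshold, surface the session to potential helpers.
const session: SessionFeatures = { editRate: 0.1, failedCompiles: 0.8, idleTime: 0.6, undoRatio: 0.5 };
if (difficultyScore(session) > 0.5) {
  console.log("Worker may be facing difficulty; notify available helpers.");
}
```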
Can these cyberinfrastructure projects on ameliorating difficulties make learning and teaching more accessible? We say yes, based on several hypotheses below.
More impact on challenged populations: Our programming studies with the average population found that difficulties were rare (which is to be expected if problems are matched to the workers) but took a long time to resolve. Arguably, those who face atypical challenges will (a) encounter certain kinds of difficulties more often, especially if instruction does not accommodate these challenges, and (b) take longer to resolve difficulties. Hence, digital support for difficulty resolution should have a larger impact on atypical populations.
Second pair of eyes more effective for visually impaired: Our programming studies also show that the vast majority of fixes involved a helper recommending a change to a single line of code, which took the workers much longer to identify on their own. This means that the time required to make the fix was a small fraction of the time required to read the code to find the problem. A second pair of eyes, whether human or automated, should be even more effective for visually impaired programmers using a screen reader to find the “fix needle” in a large “code haystack.”
Difficulty inferences useful for autistic/visually impaired helpers: In a face-to-face programming lab, an autistic or visually impaired helper who has difficulty reading faces to discover confusion can use automatic difficulty detection to find struggling workers too shy or flustered to ask for help.
Command languages more useful for visually impaired: A simple workflow composition task of connecting the output of one program to the input of another involves (a) typing a few characters in a single command line, versus (b) interacting with six screens (forms/menus) in a visual system. Consistent with the accessibility principle of ensuring that content is accessible using the keyboard alone, command languages may be more appropriate for visually impaired workers who can master them, as they require a smaller read/write ratio for the same task.
Polymorphic workflow composition more accessible: Based on the accessibility principle advocating multiple ways of obtaining the same knowledge, supporting and translating between text-based and visual user interfaces for workflow composition should increase accessibility by accommodating multiple forms of challenges, and allowing problems to be solved collaboratively by people with different abilities.
Automatic recommendations for visual impairment and motor-skill limitation: Automatic recommendations are more useful for those (a) with limited motor skills, as they do not have to use the keyboard or mouse to enter the recommended information, and (b) with visual impairments, as they do not have to read documentation to determine the recommended information.
Research to investigate these hypotheses faces the problem that it is difficult to get enough subjects from atypical populations to gather (a) training data for developing the machine-learning innovations, and (b) usability data from our innovations. Our expectation is that training data from typical populations will also be useful for predicting and ameliorating difficulties of atypical populations. Longitudinal field studies of a few subjects are an answer to (b).
Presented by Shiri Azenkot, Cornell Tech
3D models are important learning tools, and 3D printing has made many more of them available. There is huge potential in using 3D printing to teach, especially to convey visual material to students with visual impairments, who may be better able to explore a tactile model of a building, a terrain, or a globe. However, a 3D-printed globe loses information such as country names and borders. So we developed a toolkit for tagging models (Markit) and making them interactive (Talkit). In Markit, you can download a model and attach labels to its parts. Then, after the model is printed, Talkit uses the device’s camera to locate the parts marked up in Markit and read their labels aloud. Talkit uses the keyboard for choosing a model, recognizes hand gestures, and responds with speech output.
We ran a study to see how teachers could use this technology. Over six weeks, three teachers of students with visual impairments developed models with their students: a volcano, a plane, and a small map. Labels could incorporate sound effects as well as spoken descriptions of each part of the model. The on-screen images could also show high-contrast visuals with accompanying descriptions.
Presented by Sofia Tancredi, University of California, Berkeley
Math instruction is moving in exciting new directions. Designers and researchers are recognizing and expanding the use of whole-body movement, gesture, and manipulatives for learning math concepts. This movement is inspired by a paradigm shift in the philosophy of cognition from computational models of cognition (input, processing, and output) to embodied cognition models, which see our bodies and interactions with the environment as centrally constitutive of how we think and learn.
As movement-based learning activities expand, it is important to address the accessibility of such activities to all students. One critical and generally overlooked parameter is that of sensory regulation.
Individuals have different sensory needs in order to attend and learn. Sensory processing exists on a spectrum based on neurological threshold. Individuals with a high threshold are less sensitive to sensory input and need more sensory input to stay regulated. For example, one student with math difficulties that I worked with in 2010 would become exhausted whenever he tried to work at a desk. However, when this student had access to sensory regulation tools such as a balance board that provided amplified sensory input, he was able to focus and engage with math learning for long stretches. Individuals with low neurological threshold are more sensitive to sensory input. Sensory differences are associated with ADHD, ASD, mental and emotional disorders (OCD, schizophrenia), and genetic syndromes (Fragile X), and have also been linked to academic performance.
So how might students with diverse sensory regulation needs access embodied math design? Two key questions to answer toward this goal are 1) How can we both serve students’ sensory regulation needs and include them in learning through movement (that is, give a student a balance board, but also have them engage in a walk-the-number-line activity)? and 2) How can we accommodate different and often opposing sensory profiles?
I propose that the answer to question 1 lies in the integration of conceptual learning and sensory regulatory affordances of movement, or what I call sensory regulatory embodied mathematics design. An example from a current project is a walk-the-number-line activity adapted to high neurological threshold students through the wearing of ankle weights. In this example, the weights play the dual function of (1) providing regulatory sensory input to the proprioceptive system, and (2) providing sensory input that is relevant to the learning movement. Rather than engaging in competing regulatory and conceptual learning activities, sensory needs can be met harmoniously through task-relevant sensory input. In cyberlearning design, sensory inputs (particularly to the vestibular and proprioceptive sensory systems) might take the form of vibration, whole-body movement, weights, rotation, or orientation changes. These dimensions of movement learning activities need to be adjusted differently for students who need more or less sensory stimulation. Adaptive cyberlearning tools are a promising pathway towards achieving this.
As movement-based cyberlearning activities proliferate, they are poised either to improve or to exacerbate learning access for students toward both ends of the sensory spectrum. Which occurs depends on our ability to intentionally design the sensory dimensions of learning activities for sensory diversity.
Presented by Mike Jones, Brigham Young University
Deaf students who primarily learn and communicate in sign language can find it challenging to look at visuals while also using an interpreter to relay otherwise spoken instruction and information. How can sign language be watched while the student is also looking at models or away from the speaker?
There are foundations in multimedia learning (Mayer, 1998; Mayer, 2005): students learn better when hearing instruction while viewing visuals. Do deaf students learn better when viewing an animation accompanied by sign narration rather than captions? Do deaf students learn better when the signer is closer to the visual aid versus farther away?
Students use a head-mounted display in the form of eyewear to see the signer while looking at other visuals. The signer can be anywhere (same room, another room, pre-recorded, etc.), and the student can watch a presentation or visual aid at the same time. This may be especially helpful in museums, planetariums, and other places of learning outside the classroom that have historically been hard for deaf students to access.
We tested various types of equipment and where the signer would be viewed within the display. We studied split attention, how the position of the signer mattered, and how the fit of the equipment affected learning. In a planetarium, we focused on how the signer helped the student understand the material, either through a head-mounted display or projected onto the planetarium dome itself.
Presented by Lorna Quandt, Gallaudet University
Signing avatars have the potential to be a powerful communication and accessibility tool. They are programmable, responsive, and iterative, and they can be used in digital storybooks and online courses to share content in American Sign Language (ASL) online. Online courses could be enhanced by an avatar that delivers presentations and other support in ASL. Thanks to a recently funded NSF EAGER grant (Signing Avatars & Immersive Learning, SAIL), we are now working on a project to further develop these signing avatars and place them in a virtual reality environment to teach users ASL. This virtual reality environment will create an immersive, embodied learning experience.
Our avatars are designed from motion capture recordings of fluent signers. We use these data to build avatars whose signing is fluid, instead of the unnatural signing that comes from purely computer-based models. These avatars can be used to teach ASL in a virtual reality learning environment. The system is based on principles of embodied learning: students learn better when they can use their bodies to learn, and our new ASL learning system harnesses this fact to create a better way to learn ASL. Moreover, virtual reality and gesture tracking will allow learners’ own hands (in virtual reality) to demonstrate ASL from a first-person perspective. In SAIL, a student will be able to interact with virtual teachers and see their own virtual hands sign in response. Currently, SAIL is aimed at teaching ASL to non-signers, but eventually it could open up to larger populations and other applications.
Presented by Ray Rose, Online Learning and Accessibility Evangelist
We were asked to do a webinar for the United States Distance Learning Association. We asked them to include real-time captioning, but they said it was too expensive. So we chose to use Google Slides with automatic transcription.
If you convert a PowerPoint presentation to Google Slides, captions will appear as the software listens to you speak. This means there is no excuse not to have an accessible meeting. If you pair your slides with Google captions, the presentation becomes more accessible; the captions may not be perfect, but they allow viewers and listeners to gain more context than they would have otherwise.
There is no extra cost for using Google captions. All you need is a microphone on your computer to pick up your speech and to turn the feature on. The captions are relatively accurate compared with other automatic captioning services. If you use Zoom or another lecture-recording service, it can record the captions as part of the slides, though it does not create a separate transcript of the captions.
Facilitated by Sheryl Burgstahler. This panel featured students and instructors with a variety of disabilities that included those related to sight, mobility, hearing, and learning. Below are questions posed and a summary of answers provided by the panelists.
Below are participant responses to brainstorming sessions included in the CBI.
Participants represent NSF-funded Cyberlearning projects as well as projects that work to increase the participation of people with disabilities in STEM. Members of the project planning committee helped design the workshop and project products. The project evaluator tracked activities and collected data on outputs (e.g., products, participants) and outcomes (e.g., changes made as a result of participation).
The following individuals participated in the CBI.
Shiri Azenkot
Principal Investigator
Collaborative Research: EAGER: SCIENCE: Systemic Cultivation of Inclusive Equitable Nurturing Classroom Ecology
Sheryl Burgstahler
Principal Investigator
AccessCyberlearning
Jill Castek
Associate Professor
Synthesis and Design Workshop: Principles for the Equitable Design of Digitally-Distributed, Studio-Based Stem Learning Environments
Dan Comden
Accessible Technology Specialist
AccessCyberlearning
Lyla Crawford
Program Coordinator
AccessCyberlearning
Bria Davis
Science through Technology Enhanced Play (STEP)
Gaby de Jongh
Accessible Technology Specialist
AccessCyberlearning
Prasun Dewan
Professor
Collaborative Research: CyberTraining: CIU: Towards Distributed and Scalable Personalized Cyber-Training
Shari Gardner
SRI
Fengfeng Ke
Associate Professor
EXP: “Earthquake Rebuild” - Mathematical Thinking and Learning via Architectural Design and Modeling
Mike Jones
Associate Professor
Exploring Augmented Reality to Improve Learning by Deaf Children in Planetariums
Aaron Kline
Mobile Development Lead
Autism Glass: Design Challenges and Strategies for Targeted Audiences
Richard Ladner
Principal Investigator
AccessComputing
Elizabeth Lee
Publications Coordinator
AccessComputing
Katrina Martin
Education Researcher
SunBay SLE
Lorna Quandt
Assistant Professor
Signing Avatars & Immersive Learning (SAIL): Development and Testing of a Novel Embodied Learning Environment
Hadi Rangin
Accessible Technology Specialist
AccessCyberlearning
Meaghan Roper
Accessibility Assistant
Inclusive Classroom Pedagogy and Practices
Ray Rose
Designer/Contributor/Guest speaker
TxDLA Digital Accessibility Certification
Sofia Tancredi
Doctoral Student
SREMA: Sensory Regulatory Embodied Mathematics Activity
Terrill Thompson
Accessible Technology Specialist
AccessCyberlearning
Bingran Wang
Online Course Coordinator
Promoting Online Course Accessibility in Georgetown University
The AccessCyberlearning 2.0 Community of Practice engages with Cyberlearning projects on how new technologies and strategies for the delivery of online instruction can be made accessible to students and instructors with disabilities. Send a request to join to doit@uw.edu.
Other communities of practice hosted by the DO-IT Center can be found on the Communities of Practice page.
The DO-IT website contains
DO-IT and AccessCyberlearning maintain a searchable database of frequently asked questions, case studies, and promising practices related to how educators and employers can fully include students with disabilities. The Knowledge Base is an excellent resource for ideas that can be implemented in programs in order to better serve students with disabilities. In particular, the promising practices articles serve to spread the word about practices that show evidence of improving the participation of people with disabilities in postsecondary education.
Examples of Knowledge Base questions include the following:
Individuals and organizations are encouraged to propose questions and answers, case studies, and promising practices for the Knowledge Base. Contributions and suggestions can be sent to doit@uw.edu.
To learn more about accessible online learning, universal design, and information on making your technology accessible review the following websites and brochures:
AccessCyberlearning works with current and future cyberlearning researchers, technology developers, and instructors to inform their research with what is known about student differences/disabilities; design innovative learning technologies and teaching strategies that are welcoming to, accessible to, and usable by everyone, including people with disabilities; and ensure that project materials (e.g., websites, videos, curriculum) and activities (e.g., meetings, presentations) are welcoming to, accessible to, and usable by all participants.
AccessCyberlearning activities are designed to engage Cyberlearning projects and projects with similar goals in ways that explore how current knowledge and studies about people with disabilities can inform cyberlearning research, learning technology development, teaching strategies, and outreach. The goal is to make online learning opportunities high quality as well as welcoming to, accessible to, and usable by the broadest audience, including students and instructors with disabilities.
By addressing disability-related issues, AccessCyberlearning will help cyberlearning researchers, technology developers, and instructors work toward the ultimate goal of making the learning experiences of all students more effective and the online teaching experiences available to more potential instructors. Participants in AccessCyberlearning will become better prepared to make
technological advances that (1) foster deep understanding of content coordinated with masterful learning of skills; (2) draw in and encourage learning among populations not served well by current educational practices; and (3) provide new ways of assessing understanding, engagement, and capabilities of learners.
For more information on AccessCyberlearning, consult our website.
AccessCyberlearning 2.0 capacity building activities are funded by the National Science Foundation under Grant #1824450. Any opinions, findings, and conclusions or recommendations expressed in this material are those of the CBI presenters, attendees, and publication authors and do not necessarily reflect the views of the National Science Foundation or the University of Washington.
DO-IT
University of Washington Box 354842
Seattle, WA 98195-4842
doit@uw.edu
www.washington.edu/doit/
206-685-DOIT (3648) (voice/TTY)
888-972-DOIT (3648) (toll free voice/TTY)
206-221-4171 (FAX)
509-328-9331 (voice/TTY) Spokane
Founder and Director: Sheryl Burgstahler, Ph.D.
Project Coordinator: Lyla Crawford
© 2019 University of Washington. Permission is granted to copy this publication for educational, noncommercial purposes, provided the source is acknowledged.