Gutenberg Accessibility Audit Webinar
In May 2019, WPCampus released the results of an accessibility audit of the WordPress block editor, also known as Gutenberg. Following the release of those results, we were very excited to host a public webinar and question-and-answer session to discuss the results with our accessibility testing vendor, Tenon LLC.
About the webinar
The webinar took place Monday, May 13, 2019 at 12:00 PM CDT.
We were proud to partner with Tenon in offering this webinar as a discussion of accessibility and Gutenberg, but also as an opportunity for professional development. The wealth of information provided in Tenon’s report is relevant to any complex web interface, and this webinar was an opportunity to learn first-hand from a deeply knowledgeable team of accessibility experts.
This webinar was free and open for anyone to attend. Live captioning was provided during the webinar.
Recording
Transcript
[Intro music]
[Title: WPCampus: Where WordPress Meets Higher Education]
[Title: Tenon]
[Title: Gutenberg Accessibility Audit Q and A]
[Title: Download the report at https://bit.ly/2Q2J3V6]
[Title: with Rachel Cherry, Director of WPCampus and Brian DeConinck, Author at WPCampus]
[Title: and Karl Groves, Founder and President of Tenon, Michael Beck, Operations Manager at Tenon, Helen Burge, Senior Accessibility Consultant at Tenon]
>> RACHEL CHERRY: Hi, everyone.
>> MICHAEL BECK: Welcome to this Gutenberg Accessibility Audit Q&A. I am Michael Beck, the operations manager at Tenon, and with us, we have Rachel Cherry and Brian DeConinck from WPCampus and Tenon's own Karl Groves. And we'll be doing a little structured Q&A. First, Rachel and Brian will be giving us a little agenda setup, and then Karl will discuss Tenon's methodology and then go into the executive summary. And then we'll open it up to questions. So, take it away, Rachel!
>> RACHEL CHERRY: Awesome! Hey, everybody, and welcome! We're excited you could join us. And we're grateful to Tenon and their entire team for joining us today for this webinar. And we're grateful for Josh, who is doing live captioning, so you should be able to turn that on inside the Zoom application. Our agenda: we're going to go over the audit, the report, the executive summary, and talk about the process. And Tenon will take that part over. But we will have room for questions at the end. Zoom does have a Q&A tool. If you have any questions during the webinar, please submit them using the tool.
And Brian and I will be following the chat to make sure everything’s covered. So, with that said, welcome everybody, and I’ll hand it back over to Michael.
>> MICHAEL BECK: And I’ll toss that ball right over to Karl!
[Laughter]
>> KARL GROVES: All right. Cool. So I’m Karl Groves. I’m the founder of Tenon. As Rachel mentioned, we have some of the Tenon team here who were handling the audit, the primary testing duties.
So, before we go into the executive summary and results, I'll talk a little bit about how we got the gig and also a bit about the foundational parts of Tenon's audit methodology. If you've read the executive summary report, that's a great start for understanding the overview of the problems that were discovered and the status of the product. If you want to really spend a lot of time reading, then you can read the full report.
WPCampus had issued an RFP. We responded to the RFP, and we were chosen out of, I guess, a group of other excellent accessibility companies out there. And we started doing our testing earlier in the year. Our methodology is pretty closely aligned whether we're testing a website or a web-based application. And the beginning of that is the scoping of the audit. And for the scoping of this, obviously, what we wanted to do was get a series of samples of the Gutenberg Editor. And because it's more application than website, it was a bit more straightforward for us in terms of identifying what we were going to be testing.
Our goal, of course, was to test all of the things that people are going to be touching. Users who are going to be creating content, adding content, reviewing content, so on and so forth. We wanted to make sure that our scope of the audit touched all of those things.
We called those things "components." And those components are added into a test plan. And the test plan is sent back over to the client, in this case, WPCampus, for approval. And then our testing begins.
Now, for us, the way we do our testing is not ad hoc. We have a pretty rigorous test methodology. And that includes the use of a checklist. The checklist itself is based upon — I'm going to say over 200, I think if I recall, 260 — individual check items that cover the WCAG 2.1 success criteria. That checklist is derived from cumulative decades of testing experience from myself and from the other folks on the Tenon team, like Helen and Mallory and James and others who are on the team. We created this checklist as a way of making sure that we are going to be testing everything that we can. So, we have several tests under images, for instance, and we want to make sure not only that the images have alt text alternatives, but that the alt text is meaningful and not redundant with other text nearby. You get the point. What we've done is converted, basically, the WCAG success criteria into atomic, testable items. And we subject each component to that checklist.
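To make the idea of "atomic, testable items" concrete, here is a minimal sketch of what one such check for images could look like in code form. This is illustrative only; the types and function names are hypothetical, not Tenon's actual checklist tooling.

```typescript
// A minimal, hypothetical sketch of one "atomic, testable item" for images.
interface CheckResult {
  element: string;            // snippet of the markup in question, for reporting
  outcome: "pass" | "fail";
  reason?: string;
}

// Check: every <img> has an alt attribute, and the alt text is not
// redundant with the text immediately around the image.
function checkImageAltText(doc: Document): CheckResult[] {
  const results: CheckResult[] = [];
  for (const img of Array.from(doc.querySelectorAll("img"))) {
    const alt = img.getAttribute("alt");
    if (alt === null) {
      results.push({ element: img.outerHTML, outcome: "fail", reason: "Missing alt attribute" });
      continue;
    }
    const nearbyText = img.parentElement?.textContent?.trim() ?? "";
    if (alt.trim() !== "" && nearbyText.includes(alt.trim())) {
      results.push({ element: img.outerHTML, outcome: "fail", reason: "Alt text duplicates adjacent text" });
      continue;
    }
    results.push({ element: img.outerHTML, outcome: "pass" });
  }
  return results;
}
```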
Each thing is marked with pass, fail, or not applicable. When we find a thing that is a failure, we log that into our issue tracking system. In doing so, we document what the problem was, where the problem was, why it’s a problem, who it’s a problem for, the code that caused the problem, and then we also create remediation guidance. So, the remediation guidance that we provide is aimed at solving that atomic issue. We create the remediation guidance in a way that’s very clear and concise and aimed at solving the exact issue that was found. And we also, where possible, document the exact code changes that need to happen as well. So, in doing so, we create a complete list of all the issues that we’ve found and all of the things that need to get done to fix it.
As we go through our checklist, we also document where these things reside in the components list. A lot of times, especially on public websites, what we will find is that there's an opportunity for standards to — I'm sorry, not standards — duplicate patterns to reveal themselves to us during the testing. In other words, if a developer has a problem with creating proper field labels on forms, we'll find that almost all of the forms on the site will have those same sorts of problems. So, sometimes we will find that. In the case of something like Gutenberg, there were definite patterns, but not so much as you would find on a website itself, because Gutenberg is a pretty finite and closely-scoped application rather than just a bunch of pages that have duplicate issues on them; it's more of an application.
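As an illustration of that duplicate-pattern point, a single atomic check for form field labels, run against every page, is enough to surface one developer habit as a site-wide issue. The sketch below is hypothetical, not Tenon's code.

```typescript
// Hypothetical sketch: find form fields with no accessible label.
// A field passes if it has a <label for="...">, is wrapped in a
// <label>, or carries aria-label/aria-labelledby.
function findUnlabeledFields(doc: Document): HTMLElement[] {
  const fields = doc.querySelectorAll<HTMLElement>("input, select, textarea");
  return Array.from(fields).filter((field) => {
    if (field.getAttribute("type") === "hidden") return false; // not user-facing
    const hasLabelFor =
      field.id !== "" && doc.querySelector(`label[for="${field.id}"]`) !== null;
    const isWrappedInLabel = field.closest("label") !== null;
    const hasAriaLabel =
      field.hasAttribute("aria-label") || field.hasAttribute("aria-labelledby");
    return !(hasLabelFor || isWrappedInLabel || hasAriaLabel);
  });
}
```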
So, another thing that we did is that we also did some usability testing. During the usability testing — I'm going to share the report here. So, you should be seeing now my executive summary document here. I'm going to skim through real quick to talk about the UX findings, and then I'll go back through the report itself. Because an important component of this audit was to do usability testing as well. What we wanted to do, of course, is find out whether people can actually accomplish tasks with the product. So, we did some usability testing.
We got nine participants for this testing session. All of those test participants were people who have experience in doing website maintenance, website content creation, so on and so forth. They all had direct and current experience using WordPress itself, which is, of course, important in this case. We had them perform three tasks: creating content, editing content, and amending post options. And we engaged them in a think-aloud protocol. If you have ever done usability testing, a think-aloud protocol is basically where the participants verbalize how they're feeling or what comes to mind as they're completing the task. So, they basically tell us what they're doing, how they're doing it, so on and so forth. It's not quite directed in terms of prodding them with "How do you feel now?", but more letting them verbalize what they're doing and thinking as they do it. Also, at the end of each test they completed a SUS, a System Usability Scale questionnaire. The SUS has a number of questions in it that are aimed at measuring effectiveness, efficiency, and satisfaction.
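For readers unfamiliar with the SUS, the standard published scoring works as follows; this sketch uses the textbook formula and is not necessarily the audit's exact tooling.

```typescript
// Standard SUS scoring (Brooke's System Usability Scale), shown for
// illustration. Ten responses, each on a 1-5 scale.
function susScore(responses: number[]): number {
  if (responses.length !== 10) {
    throw new Error("SUS requires exactly 10 responses");
  }
  const rawSum = responses.reduce((sum, r, i) => {
    // Items 1, 3, 5, 7, 9 (even index) are positively worded: (r - 1).
    // Items 2, 4, 6, 8, 10 (odd index) are negatively worded: (5 - r).
    return sum + (i % 2 === 0 ? r - 1 : 5 - r);
  }, 0);
  return rawSum * 2.5; // scale the 0-40 raw sum to 0-100
}

// Example: a middling set of answers scores 45, close to the average
// of 46 reported in this audit.
console.log(susScore([3, 3, 3, 3, 2, 3, 3, 3, 2, 3])); // 45
```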
So, with that, I’m going to go ahead and go back through the executive summary report. I’m going to skim through here and there, just to discuss a couple of the more important takeaways from it. And then we will have the actual Q&A session here. I think I’m going to enlarge this a little bit so we can see it a little better.
This is the Word format; the deliverable was a PDF, an accessible PDF. As a matter of fact, I'm going to shout out to our friends at AbleDocs for making sure that thing was accessible when we delivered it.
So, we're going to go through here. The executive summary, of course, gives us an introduction and an overview of the findings. And a couple of key findings, especially with respect to the usability testing: participants only had an average completion rate of 63% across all tasks, with an average SUS score of 46. In terms of how the SUS score works, that's, at best, a poor to OK result. And there was a big difference in success across disability types, which we'll talk about in a moment as well.
The technical review found 90 issues across the 16 components, with an average of 18% failed check items per component. Again, going back to our methodology, that means that when we were going through the actual checklist, 18% of those items failed, on average, per component. Sixty-three percent of the issues impacted at least three user populations. Now, Tenon internally has a prioritization methodology that we provided; we've actually built it into the Tenon product. And what we consider in the Tenon product to be a severe failure is anything that impacts two or more user populations. Sixty-nine percent of those issues were medium or high severity. And 53% of those issues were WCAG level A items.
The ability to create accessible markup with Gutenberg is a bit of a mixed thing. For the most part, it's pretty good. A lot of attention has been paid to the creation of proper headings and all that sort of stuff. One of the things that's important to note about the usability results, however, is that the usability challenges also make it harder to create accessible markup.
In other words, as we note in the second paragraph listed there, it's still relatively easy for users to unknowingly create accessibility problems. And a lot of that is due to some of the other technical challenges as well. On a personal level, the big thing for me was that there's a significant correlation between the technical problems identified in the manual testing and the failures that people had in the usability tasks. It seems like it would be relatively obvious: if there's a technical problem that we think is going to impact a user who is blind, and then they fail at the actual tasks, that's good correlation. But the strength of that correlation was something that I actually didn't expect.
So going down here, we talk through the deliverables. I’m not going to belabor those. We’ll talk about those if anyone has any questions. These are tables that break down the SUS data. So, we can see here the SUS average scores across populations and then the completion rate. We’ll see here, for instance, that the completion rates are quite low in certain cases. Visually impaired, zero percent on task 1, and so on. So, this is a big deal. Task 3, only 16% for our blind participants in that task. So these are pretty important results.
Now, these visualizations here — figure 1, figure 2, and figure 3, presented on page 10, I believe — do provide a pretty good indication of where things are. This is actually not something that we created; it's a visual aid provided for scoring the SUS. And you can see here on task 1, for blind users, this experience is considered poor. The acceptability range is verbalized as not acceptable. And that is a pretty consistent thing for blind users and for dexterity-impaired users, that they're going to score quite low on the SUS. This is task 3. You can see here how far down that is. Under this task 3, it's considered "the worst imaginable" experience. By the way, there's a link in the report itself that describes what the SUS is for any of those who are unfamiliar with it.
Next, we get into the technical report. And in the technical report, we talk about what our methodology is. We show here what that checklist looks like as we work through it. So, we talk through the outcomes and all that sort of stuff. So, let's go through this and show a couple of the visual aids in the report to sort of give an idea of where things lie. So on this one, the Check Item Outcomes: this one talks about which items were scored N/A, which ones were scored pass, and which ones were scored fail. So, you can see here that 63% of the check items are going to be not applicable to a particular component, and, of course, the reason why would be if that component did not have a specific feature.
For instance, we have probably about a dozen things that are about media. So if there was no content around media, then of course, that’s going to be considered not applicable. Just got a note that people want me to zoom in a bit better. So, seven percent of those items are considered a fail.
Going down again, the rate of failure across components: we can see here that the average was 18% and the median was 17%. So, that's pretty close in terms of getting an idea of how they performed across the board. The maximum failure rate was 31%. Going through here, this is a vertical bar chart indicating the rate of failure by component, as percentages, and you can see here where they were. You can see again, going across here, that the 15-16% range is where things tend to lie. Only one component was under 11%, and the 31% that we discussed before was here: it was the media dialog. In the PDF itself, there will be a table that discusses the rate of failure by component. And this also, as you can read through their names, will give you an idea of where things are in terms of the workflows, as well. So the Publish and Unpublish workflow had a failure rate of 19%, and so on.
Issues by component. This is going to be the actual raw number of issues by component. We can see some of these are going to be quite high. The most were in Edit Media and the Media Dialog, which, of course, correlates with the other information there.
Issues by issue type is also an interesting one to note, because it discusses whether we regard a failure to be an error — I mean a bug — or a warning. For us, the warning-type things are what we view to be potential problems. They may not have a direct WCAG success criterion tied to them; rather, our own internal experience has shown them to be the kind of thing to watch out for, so that's why that's there. For us, we kind of do believe everything is a bug, but again, when it comes to some of the issues, we don't have a direct correlation with the WCAG success criteria. So, that's the biggest separator there, and, as we can see here, 77 out of the 90 things were bugs.
All right. Now, next up is issues by content type. As we are going through the system and we're doing our testing, our checklist is organized by content type. And that is going to be organized as we can see here: text content and typography; role, states, values; those sorts of things. As we can see here pretty clearly — and I think this is also backed up by the usability studies that we did — keyboard accessibility and focus control was by far the area that had the most issues: twenty-two issues under keyboard accessibility and focus control. Other big ones here: use of color and color contrast stuff, text content and typography, and then just general document structure, listed there.
We also report on the type of platform that could be experiencing the issue. We did not do specific testing on all of the specific platforms; rather, it's our own experiential knowledge that is reflected here. Things related to Android and TalkBack, iOS and VoiceOver, Mac and VoiceOver, Dragon, and so on and so forth. And you can see here that the important takeaway is that 70 of the 90 issues were considered to be universal in nature. What that generally means is that the issue is pervasive enough that you should expect it to cause issues for all users on all platforms. Some of these other ones would be types of things where you would expect issues relating only to that platform. This is a great one: high contrast mode. We know from our own experience the kinds of things that would have problems with high contrast mode that would be rather unique to it, which is why they would be here and not under universal, and so on.
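As a concrete example of that platform-specific category, modern browsers expose Windows High Contrast mode through the forced-colors media feature. A minimal sketch, where the CSS class name is a hypothetical hook:

```typescript
// Sketch: detect Windows High Contrast ("forced colors") mode and
// adapt. The "forced-colors" class name is a hypothetical hook.
const forcedColors = window.matchMedia("(forced-colors: active)");

function applyForcedColorsFixes(active: boolean): void {
  // In forced-colors mode the OS overrides author colors, so any state
  // signaled purely by background color disappears; toggle a class that
  // adds non-color cues (borders, underlines, text) instead.
  document.body.classList.toggle("forced-colors", active);
}

applyForcedColorsFixes(forcedColors.matches);
forcedColors.addEventListener("change", (e) => applyForcedColorsFixes(e.matches));
```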
By Population is another metric that we track, and this is going to be based upon, again, our experiential knowledge and our experiences with specific types of users during our careers. And, so, we can see here the breakdown of those. What I think is important to note about this, again, as we indicated earlier, is that a large number of the issues that we found were impacting multiple populations — 63% impacted at least three. And that's a big deal, and it's a big deal to understand, because of the nature of accessibility: you may find certain things that would affect, say, a motor-impaired user and a blind user, because it's keyboard-accessibility related. Or blind, low vision, and motor impaired, again, because it's relating to things that will impact multiple populations due to, say, keyboard accessibility and focus control and that sort of thing.
So we can see here, you know, out of the 90 issues, 82 of them could have an impact on cognitively impaired users. So, this is where we get into this chart here, which is the Number of Populations Impacted. Again, 49% of those issues are going to impact three populations. That's kind of a big deal. Only 5% impact only one. And 14% of the issues impact four populations. In other words, they're going to affect everybody.
Severity is another one that we've separated out. And again, 18% of those were marked as high and 51% of them were marked as medium. So, what we're saying is that 69% of the issues were medium or high in severity. The other thing to note is that low severity doesn't mean it's not a problem. In other words, when it comes to Tenon's philosophy around accessibility, if we cannot articulate a specific person or specific type of person who would have the issue, then we're not even going to log it. And that's an important distinction. We're not going to have the issue logged if it's not an actual problem for real people. That would be meaningless.
Finally, the Issues by WCAG Principle. Again, pretty obvious. Operable is going to be where we’re going to see keyboard accessibility issues. And, of course, that’s why it’s so high. Perceivable, again, that’s relating to color contrast, and alt text for images and all that sort of stuff, that’s going to be a big deal, too. We can see the breakdown there. WCAG Level, 53% of those were level A, 39% were AA, and 8% were AAA. So again, going back to the breakdown of this in respect to WCAG conformance and with respect to severity, that’s a big deal. We also broke down the WCAG success criteria. This is a really long horizontal bar chart. And we can see the breakdown there. I’m not going to go through all of those.
So here we go. Our conclusions and our recommendations. This was kind of hard for us to write in a way that is accurate yet diplomatic, and I think that the wording we have here is a good balance of the two. And that is to say that there are, as stated here, "significant and pervasive accessibility problems, the likes of which amount to a step backwards for users with disabilities over the legacy editor." That sentence is particularly important to be mindful of. The testers that were on this particular audit job, including myself, all have lots and lots of experience with WordPress. I believe my own experience with WordPress goes back a decade or more; I think I did my first theme about ten years ago. And both James and Mallory have WordPress experience, and I believe Helen had WordPress experience prior to this, as well. So we're very familiar with the legacy editor and, of course, through this testing and our own use of WordPress, with Gutenberg.
So, for a user with disabilities, the legacy editor is easier to do things with. That's not to say it's perfect, because rich text editing, whatever you want to call it — these kinds of content editors do have some challenges that are going to be inherent to the nature of trying to embed an editor into a web page, dealing with some of the focus issues with that, and so on and so forth. So, nothing is going to be perfect in that area, anyway. But the data shows, from both our usability testing and from the technical testing, that these problems are pretty extensive, and they're the kinds of things that could have been headed off earlier in the process. There is a lot to say about the nature of the problems that people experienced in the usability studies that really are reflective of shortcomings in the interaction design itself. In other words, I would say it is likely that if a bit more attention had been paid during the design phase to how things work for users, that could have helped quite a bit, even in light of some of the technical issues. The truth of the matter is that when it comes to usability, people will tend to fight to overcome things when they have a vested interest or need to do so. And I don't know if these participants could have done that because of some of these challenges.
That said, these are things that can be fixed and that can be dealt with. And I think that's a really big thing to take away as well. I know that there's been a lot of talk about this audit out there and that everything seems a bit doom and gloom, especially for people who are new to accessibility. But the truth is that all of these things can be fixed. Some of them will require design changes or interaction flow changes. That's just how it goes when it comes to product management; you do those things anyway. I believe firmly that these are not insurmountable problems and that, thankfully, it appears that the community has stepped up in terms of their willingness to get together and work through these issues.
So that’s it for the conclusions. I am going to plod through any questions here in the Q&A.
Looks like there are two open Q&A questions. OK. So, Adam Lentz says, "Do we know how many items or issues have been fixed in this testing?" Mark mentioned that you can get some information on that here, and that's going to be the accessibility items in the Gutenberg repo. So, that is definitely an excellent place to go and see if those have been fixed since testing and to determine where they are.
Next up, Thomas says, “Since we’re making conclusions that say we’ve taken a step backwards from the legacy editor, does that mean there’s a usability study on the classic editor that can be released to compare against?”
No. So, to Thomas: if you'd like to fund that usability study to find out about that, we're happy to take that on. One of the things we can say is that the participants in our usability study did have legacy editor experience, and they would be great people to ask. But I believe that you would find that they would say that it is a step back from the legacy editor.
>> HELEN BURGE: Can I just add something in there, Karl?
>> KARL GROVES: Yes, please.
>> HELEN BURGE: Yes, basically, a couple of the users who knew classic WordPress did ask me if they had to update to Gutenberg or could stay on the old one. That kind of says maybe it's not as usable for them.
>> KARL GROVES: Right. Just to reiterate…
>> MICHAEL BECK: I also had the same question on a couple of the usability tests that I did myself.
>> KARL GROVES: OK. So for those who are listening in, both Helen Burge and Michael Beck, who just answered that, were moderators on those usability tests, and both of them are saying that the test participants did ask, "Can we stay with the classic editor?" I think that's a pretty clear answer to that question from before.
Bud says, “How long will it take for Gutenberg to be as accessible as the classic editor, or is that not likely to ever happen?”
I remain an optimist in this case. I believe that Gutenberg can be as accessible as the classic editor. I think it’s very likely that can happen. I think we just need to put the work in to do it.
So I don't think that's going to be too much of a challenge. It's just whether the team who is managing that project is willing to do so.
Patrick Murray-John says, “Are there plans, funding for a follow-up after WP claims the issues are resolved?”
That's a good question. I'm going to leave that to anyone else who wants to answer. I don't know if there are any plans to do that. OK. So the Q&A is empty, but there's lots of stuff in the chat. So, are there any questions or anything else that I missed in the chat that people want to mention or discuss?
>> BRIAN DECONINCK: Karl, there are a few questions that were answered in text by some of the other folks from Tenon that might be worth talking about as well. In particular, Sarah asks if you can provide any information on what people struggled with specifically in the usability testing. If either Michael or Helen would like to speak to that, I think that might be useful.
>> KARL GROVES: Sure. Helen and Michael, would you like to go?
>> HELEN BURGE: OK. As I said in the chat, some of it was more for those who couldn't see the screen. The dynamic content wasn't being heard. So, for instance, quite often when you moved focus to some of the fields, other content would then show. But the user wasn't aware that the screen was constantly changing as they navigated around. So that made it difficult. And you've got the people with keyboard-only navigation who found they couldn't get to certain parts because what should have been the logical order didn't match the visual order. So, they got lost as they were navigating the screens. And one of the interesting ones that was highlighted was the fact that there may be multiple blocks within a page, but none of the blocks are given meaningful names for the user to know which block is which. So you don't have first block, second block, third block as a simple sort of label. There was also an instance where one of the tasks in the final survey questions was to change the settings for the page. And there are settings in the left-hand panel, and settings in the top menu, and settings within the actual main page. So which settings do they change, or which do they open, to get the right one to change what they needed to?
>> MICHAEL BECK: One of the big issues I remember is the screen readers not giving feedback as to where people were, in that context, like Helen said: the first, second, third blocks. As you added more and more content, people would just get lost. Those who couldn't see the screen would get completely lost in the page itself, because they had no context as to where they actually were, unless they went in and started to reread the blocks themselves. One user I remember talked about how he would just automatically go to the HTML editor because it would be completely easier for him to use that as opposed to using Gutenberg, which kind of defeats the purpose of Gutenberg.
>> HELEN BURGE: Yes. Because of the dynamic changes, as well. What was relevant one second — you navigate away, and suddenly it's no longer available. I had one user getting so frustrated, because she couldn't get back to what she needed to do because she had gone past it. So again, it was relying on people being able to see what was going on. It wasn't conveyed to people who wanted to hear it.
>> MICHAEL BECK: Even for those with sight: one of the users had ADHD, and there were lots of warning boxes. Even as the page itself would auto-save, the constant little notification, which was just a little blink, would be distracting to her. And that was something she struggled with; it was a constant struggle for her because it kept blinking: "Auto-save, Auto-save, Auto-save." And that was something that she had a problem with.
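The struggles described above map onto standard ARIA patterns. The sketch below is a hypothetical illustration, not Gutenberg's actual code: it gives blocks accessible names so screen reader users can tell them apart, and routes status updates through a polite live region rather than a silent or blinking change.

```typescript
// Sketch of two ARIA patterns raised above; helper names are
// illustrative, not Gutenberg's actual implementation.

// 1. Give each block an accessible name so a screen reader user can
//    tell "Block 2 of 5: Paragraph" apart from its neighbors.
function labelBlock(block: HTMLElement, index: number, total: number, type: string): void {
  block.setAttribute("role", "group");
  block.setAttribute("aria-label", `Block ${index} of ${total}: ${type}`);
}

// 2. Announce dynamic changes via a polite live region. "Polite" means
//    the screen reader waits for a pause instead of interrupting, and a
//    single text update avoids the constant visual blinking described
//    for the auto-save notice.
const liveRegion = document.createElement("div");
liveRegion.setAttribute("aria-live", "polite");
liveRegion.className = "screen-reader-text"; // assumes a visually-hidden CSS class
document.body.appendChild(liveRegion);

function announce(message: string): void {
  liveRegion.textContent = message; // e.g., announce("Draft saved")
}
```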
>> KARL GROVES: Getting back to some of the questions, Patrick says, "Do you have any guidance or suggestions for plug-in developers who are creating their own Gutenberg blocks?"
I think there's nothing you can do to control Gutenberg outside of your own plug-in. In your case, what you really need to care about is whether your plug-in itself is accessible. And that's kind of it. You need to follow accessibility best practices, and if you need to learn about accessibility, there are tons of awesome resources. One of the ones I always point people to are the resources from WebAIM. That's webaim.org, and they have a fantastic mailing list, as well as a fantastic website with tons of resources that have been around since the early 2000s. It still manages to be an excellent resource, so that would be the one I'd go for. Again, just following accessibility best practices is really it.
Elaine says, “Are there any restrictions on sharing the audit results, such as screen shotting some of the findings or presentations at a WordCamp?”
Because WPCampus is essentially the customer in this case, we're going to leave it to them to determine what they believe to be the right way of handling it. From Tenon's perspective, we are fine with whatever WPCampus chooses in that case. So, Rachel or Brian, do you guys want to answer that question, in terms of the sharing rights of the audit?
>> RACHEL CHERRY: Yeah. I was going to type it out. So, basically, WPCampus does not have any restrictions. We would love for you to share this report in an attempt to help advocate and spread education. We do ask that you attribute the report, link to it, and give some information about it, because we don't want this to be taken out of context and things like that. So, make sure you reference it just as you would any other kind of research.
>> KARL GROVES: Cool. Joseph's question says, "I travel to all the WordCamps across America speaking and teaching developers and designers to create with accessibility in mind from the beginning. Are there any bullet points you could mention that you think will make the biggest difference?"
You know, I think, again, the thing that will make the biggest difference is to think about accessibility upfront. That's got to be the message of this report, as well as almost anything. The challenge that I have personally with this report — and I think I speak for the rest of the team in this — is that, for the most part, this has gotten a lot of attention that we don't see with most of the audits that we do, primarily because the audience of the audits that we do is so small. And, so, to a lot of people, the Gutenberg audit might seem like a huge deal, like, "Oh my God, the sky is falling!" The reality is that, for the most part, there's nothing shocking in this. We see this all the time. I think, as we mentioned in the executive summary, we've collectively done over 300 audits in our careers. That would be me, Helen, Mallory, John, and James. Over 300 audits. And there was nothing in this audit that shocked us in terms of what we found. We didn't see anything that we'd never seen before. And it's kind of unfortunate that that's the state of accessibility. The issues we uncovered when testing Gutenberg were issues that we would have uncovered by testing any product from any other customer that had not paid attention to accessibility.
This is definitely the same thing that we would have seen from any of our customers for whom we were doing their first audit. In other words, there was not much attention being paid to accessibility in the design phase. There was not much attention being paid to accessibility during development. And there wasn't much attention being paid during testing and remediation. There's a lot to be said for the fact that it's just not an accessible product, and that's what we see all the time. So in terms of bullet points: first off, you know, let's start thinking about accessibility first, when we start thinking about how people are going to use it. And, as a matter of fact, I guess one of the things I would say is that had the wisdom of the WordPress Accessibility Team been heeded earlier on, there probably would not have been this many issues in the first place. Hopefully that helped and didn't confuse things more. Are there any other questions?
So, Joseph again asks, “I would love to learn and help in any way I can. Is there any contact information you’re willing to give out here? I know I can help move accessibility and WordPress forward.”
So, our information is listed on our website: tenon.io. We are here to do anything we can for any of our customers. WPCampus are the fine folks who helped us out, so maybe WPCampus could share their information as well?
>> BRIAN DECONINCK: I'll also jump in, Karl, and say if you have any questions that you don't ask during this webinar, but you'd like to ask, we do have audit@wpcampus.org as an email address, and we'll post that in the chat, as well. Those are questions that either Rachel or I can answer, or we'll pass to the Tenon team, as well.
>> MICHAEL BECK: Some people can't see the questions in the Q&A box, so I'm just going to go through and kind of read what was answered. Curtis asked, "What should we make of the high SUS usability score on a test that had a zero percent completion rate from one tester?" Helen had answered that until the participant got blocked, they felt most items were usable, but then they could not complete the task as needed and would need to learn more about Gutenberg and how it works to be able to do it.
Mark had asked if any of the tests were recorded and can they be shared?
Yes, they were recorded and I’m currently editing a highlights reel, so we can highlight the specific usability issues for that.
Tammy asked if all usability participants were from the U.S. and primarily English speakers.
Most of them were. We had a couple of Europeans and an Indian participant, all of whom were non-native English speakers.
Joseph had asked what testing software we used for the audit, and Mallory had answered that we use assistive tech like screen readers, Windows High Contrast, browser zoom, and ZoomText, as well as reading the actual code itself.
Steve just asked a question about the tools we use to track the issues. So I’ll let Karl take care of that.
>> KARL GROVES: Right. So Steve Woodson says, “Can you share the tools you use to track the issues you found? You mentioned you logged the issue, who it affects, remediation steps, and effective code. Is that through a proprietary system or something that can be purchased?”
Steve, we used a customized workflow in Jira. Because Jira allows you to create custom workflows and also add custom fields and all that sort of stuff, that is what we used for this, despite Jira's otherwise horrible UX. We do really like its ability to be customized, and that's how we used it specifically. Our customized workflow has the custom fields and the other things that we need to gather the data, track it, and, of course, report on it. Any other questions?
And then Joseph said, “How many hours did the report take to make?”
That's a good question. I'm going to stop sharing so I can actually get the data on that. All in all, including all the other things that we did — the usability study, the blah, blah, blah — 308 hours.
All right. So Damien says, “Any significant progress on the GitHub issues since the report’s release?” I’m going to leave that to Brian or Rachel who are probably paying more attention to that than I am, at the moment.
>> RACHEL CHERRY: If you open GitHub, I think 43 of them have been closed, which is almost half, which is pretty amazing. I have been trying to follow along in the various WordPress Slack channels — for example, obviously, the Accessibility channel, the Design channel, and the Core Editor channel. And everyone is, like, super gung-ho about all of the issues, and there have been lots of discussions and plans made and triage meetings. So, it does seem like everyone is super excited to address these issues and get them resolved. So I think we'll see a lot more action over the next few weeks.
>> KARL GROVES: Cool. And that is also a good point made by Mallory in there: a lot of the accessibility issues were already identified by the WordPress Accessibility Team in some form, which is definitely good. And when we log the issues into the repo, we log them as if nothing else had already been done, primarily because we want to ensure complete coverage and leave it to whoever's managing the project to determine whether what we logged was duplicate, or duplicate in spirit, or something like that. Yeah.
>> RACHEL CHERRY: To add to that, if you were listening and you want to help address these issues, I would recommend joining one of those WordPress Slack channels that I mentioned: the Accessibility team, the Design team, or the Gutenberg team.

Also, a lot of these issues can be taken and addressed outside of Gutenberg. You know, there was a group of issues related to the media pop-up — whatever we call that window. Dialog. There were a fair amount of those issues. They were part of this audit because that window is required to interact with some Gutenberg components. And, obviously, that's a component that's used in the rest of the admin as well. So, we can take some lessons learned from this report, go outside Gutenberg into the rest of the core software, and fix issues there as well.

So, there's a lot of value in this report that can not only be taken and applied to Gutenberg, but can be taken and applied to WordPress core. It can be taken and applied to plug-ins. It can be taken outside of WordPress. You could be building a whole other app entirely, and there's information in here that's super helpful for you if you're not familiar with these common accessibility issues to look out for. When I was reading it, there's a whole section about forms and stuff. So, in my mind, I was like, this would be super helpful for all of the form plug-ins that we have in our ecosystem — they could read over this report and make sure that they're covering these things inside their plug-in. So, there's lots of data and education in here for us as a community to take and apply, even outside of Gutenberg. So I would recommend taking that mindset as you read the report.
>> KARL GROVES: Yeah, I definitely agree and I guess that goes back to our earlier discussion where I mentioned we didn’t see anything shockingly new that we had never seen before. Like you said, when it comes to forms, you know, there’s a right way and a wrong way to do forms. So, if you were to go through that report and look at that forms section, there’s a lot of value to be had, even if it’s not Gutenberg related, just related to whatever else you’re doing around forms. Same thing goes for focus management and keyboard stuff, especially if anyone is creating any custom controls.
>> RACHEL CHERRY: Yeah, for me, reading it, some of the really common issues were that we were modifying default browser styles — for example, removing focus indicators, or changing how focus looks, or, at one point, changing the default cursor and stuff like that. So, there's a big lesson, I think, to take away in that: there's a reason why these default browser styles and settings exist. They're the expected behavior, and they're what accessibility APIs and assistive technology are interacting with. So, if we change those default styles to fit something that we want to see, we're removing expected behavior from the rest of our users. So, we have to think carefully, when we go and change these default focus styles and default cursors and things like that, about the problems we're potentially causing for expected behavior inside of browsers and other user agents.
>> KARL GROVES: Yes. Absolutely.
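To put that point about default focus styles in code terms: rather than removing the outline, replace it with a clearly visible indicator, scoped with :focus-visible so it targets keyboard navigation. A minimal sketch, injected via JavaScript only to keep the example self-contained:

```typescript
// Sketch of the focus-style point: never remove the outline without
// a replacement; restyle it instead.
const style = document.createElement("style");
style.textContent = `
  /* Anti-pattern: a:focus { outline: none; } leaves keyboard users
     with no visible cue of where they are. */

  /* Better: keep a clearly visible indicator, scoped with
     :focus-visible so it shows up for keyboard navigation. */
  :focus-visible {
    outline: 3px solid #005a9c;
    outline-offset: 2px;
  }
`;
document.head.appendChild(style);
```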
>> RACHEL CHERRY: Yeah, just another reminder that we’ll have audit@wpcampus.org available if anyone wants to submit any follow-up questions, whether it’s about accessibility or the report itself, and Brian and I will try to answer or we’ll send to Tenon if we need to. And basically, there’s an audit page on our website wpcampus.org/audit, where we’re hosting all of this information.
>> KARL GROVES: Excellent. Thank you all for attending. Thank you, Rachel and Brian, for giving us this cool work to do. I always enjoy this kind of thing, especially when we can have a broader impact on the web. So, as before, thank you all for attending and talk to you soon. Bye.
>> RACHEL CHERRY: Thanks, everybody. Bye.
[Webinar concluded]