2020

How do you make a virtual event accessible for people who are blind or visually impaired?

When I started work on Sight Tech Global back in June this year, I was confident that we would find the answer to that question pretty quickly. With so many virtual event platforms and online ticketing options available to virtual event organizers, we were sure at least one would meet a reasonable standard of accessibility for people who use screen readers or other devices to navigate the Web.

Sadly, I was wrong about that. As I did my due diligence and spoke to CEOs at a variety of platforms, I heard a lot of “we’re studying WCAG [Web Content Accessibility Guidelines] requirements” or “our developers are going to rewrite our front-end code when we have time.” In other words, these operations, like many others on the Web, had not taken the trouble to code their sites for accessibility from the start, which is the least costly and fairest approach, not to mention the one compliant with the ADA.

This realization was a major red flag. We had announced our event dates – Dec 2-3, 2020 – and there was no turning back. Dmitry Paperny, our designer, and I did not have much time to figure out a solution. No less important than the dates was the imperative that the event’s virtual experience work well for blind attendees, given that our event was really centered on that community.

We decided to take Occam’s razor to the conventions surrounding virtual event experiences and answer a key question: What was essential? Virtual event platforms tend to be feature heavy, which compounds accessibility problems. We ranked what really mattered, and the list came down to three things:

  • live-stream video for the “main stage” events
  • a highly navigable, interactive agenda
  • interactive video for the breakout sessions

We also debated adding a social or networking element, and decided that was optional unless there was an easy, compelling solution.

The next question was what third-party tools we could use. The very good news was that YouTube and Zoom get great marks for accessibility. People who are blind are familiar with both, and many know the keyboard commands to navigate the players. We learned this largely by word of mouth at first and then found ample supporting documentation from YouTube and Zoom. So we chose YouTube for our main stage programming and Zoom for our breakouts. It helps, of course, that both YouTube and Zoom are very easy to incorporate in a website, which became our plan.

Where to host the overall experience was the next question. We wanted to direct attendees to a single URL to join the event. Luckily, we had already built an accessible website to market the event. Dmitry had learned a lot in the course of designing and coding that site, including the importance of thinking about both blind and low-vision users. So we decided to host the event experience on our site itself – instead of using a third-party event platform – by adding two elements to the site navigation: Event (no longer live on the site) and Agenda.

The first amounted to a “page” (in WordPress parlance) that contained the YouTube live player embed and, beneath that, text descriptions of the current and upcoming sessions, along with prominent links to the full Agenda. Some folks might ask: Why place the agenda on a separate page? Doesn’t that make it more complicated? Good question, and the answer was one of many revelations that came from our partner Fable, which specializes in usability testing with people with disabilities. The answer, as we found time and again, was to imagine navigating with a screen reader, not your eyes. If the agenda were beneath the YouTube player, it would create a cacophonous experience – imagine trying to listen to the programming and at the same time “read” (as in “listen to”) the agenda below it. A separate page for the agenda was the right idea.
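For illustration, here is a minimal sketch (in TypeScript, generating the markup as a string) of how an Event page like the one described might be structured for screen reader users: a descriptively titled YouTube iframe embed, headings for the current and upcoming sessions, and a prominent link out to the separate Agenda page. The function name, video ID and `/agenda` path are hypothetical, not taken from the actual site.

```typescript
// Sketch of an accessible "Event" page fragment: a titled YouTube embed,
// then text descriptions, then a prominent link to the separate Agenda page.
// A descriptive `title` on the iframe lets screen readers announce it
// meaningfully instead of reading a bare URL or "frame".
function renderEventPage(videoId: string, current: string, upNext: string): string {
  return `
<main>
  <h1>Sight Tech Global – Main Stage</h1>
  <iframe
    src="https://www.youtube.com/embed/${videoId}"
    title="Live main stage video player"
    allow="autoplay; encrypted-media">
  </iframe>
  <h2>Now playing</h2>
  <p>${current}</p>
  <h2>Up next</h2>
  <p>${upNext}</p>
  <p><a href="/agenda">View the full agenda</a></p>
</main>`;
}

const page = renderEventPage("abc123", "Opening keynote", "Panel: AI and accessibility");
console.log(page.includes('title="Live main stage video player"')); // true
```

The headings give screen reader users landmarks to jump between, and keeping the agenda behind a link (rather than inline below the player) avoids the cacophony described above.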

The Agenda page was our biggest challenge because it contained a lot of information, required filters and also, during the show, had different “states” – as in which agenda items were “playing now” versus upcoming versus already concluded. Dmitry learned a lot about the best approach to drop-downs for filters and other details to make the agenda page navigable, and we reviewed it several times with Fable’s experts. We nonetheless decided to take the fairly unprecedented step of inviting our registered blind event attendees to join us for a “practice event” a few days before the show in order to get more feedback. Nearly 200 people showed up for two sessions. We also invited blind screen reader experts, including Fable’s Sam Proulx and Facebook’s Matt King, to join us to answer questions and sort out the feedback.
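As a sketch of the “states” logic described above: each agenda item’s state can be computed from its start and end times and exposed as plain text that a screen reader announces naturally. The function and labels below are illustrative assumptions, not the event’s actual code.

```typescript
type SessionState = "Playing now" | "Upcoming" | "Concluded";

// Compute an agenda item's state from its start/end times relative to "now".
// Returning plain-text labels keeps the agenda announcement-friendly for
// screen readers, rather than relying on visual styling alone.
function sessionState(start: Date, end: Date, now: Date): SessionState {
  if (now < start) return "Upcoming";
  if (now >= end) return "Concluded";
  return "Playing now";
}

// Example: a session running 10:00–10:30 UTC on day one of the show.
const sessionStart = new Date("2020-12-02T10:00:00Z");
const sessionEnd = new Date("2020-12-02T10:30:00Z");
console.log(sessionState(sessionStart, sessionEnd, new Date("2020-12-02T10:15:00Z"))); // "Playing now"
```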

It’s worth noting that there are three major screen readers: JAWS, which is used mostly on Windows; VoiceOver, which is built into all Apple products; and NVDA, which is open source and works on PCs running Microsoft Windows 7 SP1 and later. They don’t all work the same way, and the people who use them range from experts who know hundreds of keyboard commands to occasional users with more basic skills. For that reason, it’s really important to have expert interlocutors who can help separate good suggestions from simple frustrations.

The format for our open house (session one and session two) was a Zoom meeting, where we provided a briefing about the event and how the experience worked. Then we shared links to a working Event page (with a YouTube player active) and the Agenda page and asked people to give it a try and return to the Zoom session with feedback. Like so much else in this effort, the result was humbling. We had the basics down well, but we had missed some nuances, such as the best way to order information in an agenda item for someone who can only “hear” it versus “see” it. Fortunately, we had time to tune the agenda page a bit more before the show.
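To illustrate the kind of ordering nuance the practice sessions surfaced: an agenda item read aloud works best when the essentials come first. The arrangement below (status, then title, then time and speakers) is one plausible listener-first ordering, not necessarily the one the team settled on; all names and values are hypothetical.

```typescript
interface AgendaItem {
  title: string;
  time: string;       // already formatted for display, e.g. "10:00 AM PST"
  speakers: string[];
  status: string;     // e.g. "Playing now", "Up next", "Concluded"
}

// Build a single plain-text label per agenda item, leading with what a
// listener needs first (status, title) and deferring longer detail
// (time, speakers) to the end.
function listenerLabel(item: AgendaItem): string {
  const speakers = item.speakers.length ? ` With ${item.speakers.join(", ")}.` : "";
  return `${item.status}: ${item.title}. ${item.time}.${speakers}`;
}

console.log(listenerLabel({
  title: "Opening keynote",
  time: "10:00 AM PST",
  speakers: ["Jane Doe"],
  status: "Up next",
}));
// "Up next: Opening keynote. 10:00 AM PST. With Jane Doe."
```

A sighted visitor can scan a grid and pick out the one field they care about; a listener hears fields strictly in sequence, so front-loading the status and title spares them from sitting through boilerplate on every item.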

The practice session also reinforced that we had made a good move to offer live customer support during the show as a buffer for attendees who were less sophisticated in the use of screen readers. We partnered with Be My Eyes, a mobile app that connects blind users to sighted helpers, who use the blind person’s phone camera to help troubleshoot issues. It’s like having a friend look over your shoulder. We recruited 10 volunteers and trained them to answer questions about the event, and Be My Eyes put them at the top of the list for any calls related to Sight Tech Global, which was listed under the Be My Eyes “event” section. Our event host, the incomparable Will Butler, who happens to be a vice president at Be My Eyes, regularly reminded attendees to use Be My Eyes if they needed help with the virtual experience.

A month out from the event, we were feeling confident enough that we decided to add a social interaction feature to the show. Word on the street was that Slido’s basic Q&A features worked well with screen readers, and in fact Fable used the service for its own projects. So we added Slido to the program. We did not embed a Slido widget beneath the YouTube player, which might have been a good solution for sighted participants; instead, we added to each agenda session a link to a standalone Slido page, where attendees could add comments and ask questions without getting tangled in the agenda or the livestream. The solution ended up working well, and we had more than 750 comments and questions on Slido during the show.

When Dec. 2 finally arrived, we were ready. But the best-laid plans often go awry: we were only minutes into the event when our live closed-captioning suddenly broke. We decided to halt the show until we could bring it back up, for the benefit of deaf and hard-of-hearing attendees. After much scrambling, captioning came back. (See more on captioning below.)

Otherwise, the production worked well, both as programming and as an accessible experience. How did we do? Of the 2,400+ registered attendees, 45% said they planned to use screen readers. When we surveyed those attendees immediately after the show, 95 replied, and they gave the experience a 4.6/5 score. As for the programming, our attendees (this time we asked everyone – 157 replies) gave us a score of 4.7/5. Needless to say, we were delighted by those outcomes.

One other note concerned registration. At the outset, we also “heard” that one of the event registration platforms was “as good as it gets” for accessibility. We took that at face value, which was a mistake. We should have tested, because comments from people trying to register, as well as a low turnout of registrations from blind people, revealed after a few weeks that the registration site may have been better than the rest but was still really disappointing. It was painful, for example, to learn from one of our speakers that alt tags were missing from images (and there was no way to add them) and that screen reader users had to tab through mountains of information in order to get to actionable links, such as “register.”

As we did with the website, we decided that the best course was to simplify. We added a Google Form as an alternative registration option; Google Forms are highly accessible. We instantly saw registrations increase strongly, particularly among blind people. We were chagrined to realize that our first choice for registration had been excluding the very people our event intended to include.

We were able to use the Google Forms option because the event was free. Had we been trying to collect registration fees, Google Forms would not have been an option. Why did we make the event free to all attendees? There are several reasons. First, given our ambitions to make the event global and easily available to anyone interested in blindness, it was difficult to arrive at a universally acceptable price point. Second, adding payment as well as a “log-in” feature to access the event itself would have created another accessibility headache. With our approach, anyone with the link to the Agenda or Event page could attend without any log-in or registration demand. We knew this would create some leakage in terms of knowing who attended the event – quite a lot, in fact, because we had 30% more attendees than registrants – but given the nature of the event, we thought that losing out on names and emails was an acceptable price to pay for the accessibility benefit.

If there is an overarching lesson from this exercise, it’s simply this: Event organizers have to roll up their sleeves and really get to the bottom of whether the experience is accessible or not. It’s not enough to trust platform or technology vendors, unless they have standout reputations in the community, as YouTube and Zoom do. It’s as important to ensure that the site or platform is coded appropriately (to WCAG standards, and verified with a tool like Google’s Lighthouse) as it is to do real-world testing to ensure that the actual, observable experience of blind and low-vision users is a good one. At the end of the day, that’s what counts the most.

A final footnote. Although our event focused on accessibility issues for people who are blind or have low vision, we were committed from the start to including captions for people who would benefit. We opted for the best-quality outcome, which is still human (versus AI) captioners, and we worked with VITAC to provide captions for the live Zoom and YouTube sessions and with 3Play Media for the on-demand versions and the transcripts, which are now part of the permanent record. We also heard requests for “plain text” (no mark-up) versions of the transcripts in an easily downloadable format for people who use braille readers. We supplied those as well. You can see how all those resources came together on pages like this one, which contain all the information on a given session and are linked from the relevant section of the agenda.

 



Amazon just announced that it’s acquiring Wondery, the network behind podcasts including “Dirty John” and “Dr. Death.”

Wondery will become part of Amazon Music, which added support for podcasts (including its own original shows) in September. At the same time, the announcement claims that “nothing will change for listeners” and that the network’s podcasts will continue to be available from “a variety of providers.”

Media companies and streaming audio platforms are all making big bets on podcasting, with Spotify making a series of acquisitions including podcast network Gimlet, SiriusXM acquiring Stitcher and The New York Times acquiring Serial Productions. Amazon is coming relatively late to this market, but it will now have the support of a popular podcast maker as it works to catch up.

“With Amazon Music, Wondery will be able to provide even more high-quality, innovative content and continue their mission of bringing a world of entertainment and knowledge to their audiences, wherever they listen,” Amazon wrote.

Financial terms were not disclosed. The Wall Street Journal previously reported that acquisition talks were in the works, and that those talks valued Wondery at around $300 million.

The startup was founded in 2016 by former Fox executive Hernan Lopez (who’s currently fighting federal corruption charges tied to his time at Fox). Numbers from Podtrac rank it as the fourth largest podcast publisher in November, with an audience in the U.S. of more than 9 million unique listeners.

Wondery has raised a total of $15 million in funding from Advancit Capital, BDMI, Greycroft, Lerer Hippeau and others, according to Crunchbase.

 



Here’s another edition of “Dear Sophie,” the advice column that answers immigration-related questions about working at technology companies.

“Your questions are vital to the spread of knowledge that allows people all over the world to rise above borders and pursue their dreams,” says Sophie Alcorn, a Silicon Valley immigration attorney. “Whether you’re in people ops, a founder or seeking a job in Silicon Valley, I would love to answer your questions in my next column.”

Extra Crunch members receive access to weekly “Dear Sophie” columns; use promo code ALCORN to purchase a one- or two-year subscription for 50% off.


Dear Sophie:

I’m working in the Bay Area on an H-1B visa and my employer won’t sponsor my green card.

I really want permanent residence, but I never won a Nobel prize; I’m single; and I don’t have a million dollars yet. However, I think I might qualify for an EB-2 NIW green card.

What can you share?

— National in Napa

Dear National:

Wonderful that you’re taking matters into your own hands! This is a complicated process, so the most important advice I can give you is to retain an experienced business immigration attorney to represent you and prepare and file your green card case.

For additional do’s and don’ts in U.S. immigration, please check out the recent podcast that my law firm partner, Anita Koumriqian, and I posted on the commandments of immigration (and especially what to not do when it comes to visas and green cards).

This particular episode focuses on family-based green cards, but these recommendations are timeless and apply to individuals who are self-petitioning for employment-based green cards, such as the EB-2 NIW (National Interest Waiver) for exceptional ability and the EB-1A for extraordinary ability. Our top recommendation in that podcast episode is to avoid DIY immigration, so definitely retain legal counsel!

Filing for an EB-2 NIW or any green card requires more than just filling out the appropriate forms. You need to understand the process, the law and its requirements, and the complicated analysis of whether and how you can best qualify.

With any immigration matter, you need the resources to fully understand the process, the steps for applying, and the timing and deadlines. Above all, we want to make sure that you always maintain legal status (never falling out of status) so that you can remain in the U.S.



This has been the year of the social organization. As the COVID-19 pandemic swept across the world and the United States, governments and a patchwork of nonprofits and volunteer organizations sprang into action, offering everything from food and medical supplies to children’s books and clothing to individuals and families struggling in the virus’s wake.

Perhaps the biggest barrier to getting people help, though, has been digital: nonprofits need to connect with their beneficiaries over the internet just as much as any retailer today. Unfortunately, tech talent is expensive and hard to find, particularly for often cash-strapped nonprofits.

That was part of the impetus for two Stanford seniors, Mary Zhu and Amay Aggarwal, to co-found Develop for Good, a matching service designed to connect motivated and ambitious undergrads in computer science, design and economics to nonprofits with specific projects that require expertise. They launched the network in March as the pandemic started spreading rapidly, and since then, the organization has itself started growing exponentially as well.

Develop for Good “was in response to [the pandemic], but at the same time, a lot of our peers were having their internships canceled, [and] a lot of companies were having hiring freezes,” Zhu explained. “People were also seeking opportunities to be able to develop their professional skills and develop their project experience.” This coincidence of needs among both students and nonprofits helped accelerate the matching that Develop for Good offers.

So far, the 501(c)(3) nonprofit has coordinated more than 25,000 volunteer hours across groups like the Ronald McDonald House, UNICEF, the Native American Rights Fund (NARF), Easterseals, The Nature Conservancy, Save the Children, AARP and more. The program, which in its first batch focused on Zhu and Aggarwal’s network at Stanford, has since expanded to more than a dozen schools across the United States. The two first reached out to nonprofits through Stanford’s alumni network, although as the program’s reputation has grown, they have started getting inbound interest as well.

Volunteers take on a project for 5-10 hours per week for 10 weeks, typically in teams. Each team meets their nonprofit client at least weekly to ensure the project matches expectations. Typical projects include application development, data visualization, and web design. Most projects conclude at the end of the batch, although the founders note that some in-depth projects like product development can cross over into future batches. As the program has expanded, Zhu and Aggarwal have added a more formal mentorship component to the program to help guide students through their work.

Applications for the next batch starting in January are currently open for students (they’re due January 2nd, so get them in quick!). The founders told me that they are expecting 800 applications and are likely going to be able to match about 200 volunteers to 32 projects. Applications are mostly about matching interests with potential programs for the best fit, rather than being a purely competitive exercise. The program has worked on 50 projects to date.

For this next batch, Amazon Web Services will sponsor a stipend for first-generation and low-income students to help defray the financial impact of volunteer work for some students. “Over the past cycle, a few people had to drop out because they said, ‘they’re unable to work for free because they’re having a lot of financial stress for their families’,” Aggarwal said. The new stipend is meant to help these students continue to volunteer while alleviating some of that financial burden.

Aggarwal said that two-thirds of the program’s volunteer developers and designers are female, and one-third are first-generation or low-income.



Tech got dragged into yet another irrelevant Congressional scuffle this week after President Trump agreed to sign a bipartisan pandemic relief package but continued to press for additional $2,000 checks that his party opposed during negotiations.

In tweets and other comments, Trump tied a push for the boosted relief payments to his entirely unrelated demand to repeal Section 230 of the Communications Decency Act, a critical but previously obscure law that protects internet companies from legal liability for user-generated content.

The political situation was complicated further after Republicans in Georgia’s two extremely high-stakes runoff races sided with Trump on the additional checks rather than with the majority of Republicans in Congress.

In a move that’s more a political maneuver than a real stab at tech regulation, Senate Majority Leader Mitch McConnell introduced a new bill late Tuesday linking the $2,000 payments Republicans previously blocked to an outright repeal of Section 230 — a proposal that’s sure to be doomed in Congress.

McConnell’s bill humors the president’s eclectic cluster of demands while creating an opportunity for his party to look unified, sort of, in the face of the Georgia situation. The proposal also tosses in a study on voter fraud, not because it’s relevant but because it’s another pet issue that Trump dragged into the whole mess.

Over the course of 2020, Trump has repeatedly returned to the idea of revoking Section 230 protections as a cudgel he can wield against tech companies, particularly Twitter, when the platform’s rules result in his own tweets being downranked or paired with misinformation warnings.


If the latest development sounds confusing, that’s because it is. Section 230 and the stimulus legislation have nothing at all to do with one another. And we were just talking about Section 230 in relation to another completely unrelated bit of legislation, a huge annual defense spending bill called the NDAA.

Last week Trump decided to veto that bill, which enjoyed broad bipartisan support because it funds the military and does other mostly uncontroversial stuff, on the grounds that it didn’t include his totally unrelated demand to strip tech companies of their Section 230 protections. Trump’s move was pretty much out of left field, but it opened the door for Democrats to leverage their cooperation in a two-thirds majority to override Trump’s veto for other stuff they want right now, namely those $2,000 stimulus checks for Americans. Sen. Bernie Sanders is attempting to do just that.

Unfortunately, McConnell’s move here is mostly a cynical one, to the detriment of Americans in financial turmoil. An outright repeal of Section 230 is a position without much, if any, support among Democrats. And while closely Trump-aligned Republicans have flirted with the idea of stripping online platforms of the legal shield altogether, some flavor of reform is what’s been on the table and what’s likely to get hashed out in 2021.

For lawmakers who understand the far-reaching implications of the law, reform rather than a straight up repeal was always a more likely outcome. In the extraordinarily unlikely event that Section 230 gets repealed through this week’s strange series of events, many of the websites, apps and online services that people rely on would be thrown into chaos. Without Section 230’s liability protections, websites from Yelp to Fox News would be legally responsible for any user-generated reviews and comments they host. If an end to comments sections doesn’t sound so bad, imagine an internet without Amazon reviews, tweets and many other byproducts of the social internet.

The thing is, it’s not going to happen. McConnell doesn’t want Americans to receive the additional $2,000 checks and Democrats aren’t going to be willing to secure the funds by agreeing to a totally unrelated last-minute proposal to throw out the rules of the internet, particularly with regulatory pressure on tech mounting and more serious 230 reform efforts still underway. The proposed bill is also not even guaranteed to come up for a vote in the waning days of this Congressional session.

The end result will be that McConnell humors the president by offering him, sort of, what he wanted; Democrats look bad for suddenly opposing much-needed additional stimulus money; and Americans in the midst of a deadly and financially devastating crisis probably don’t end up with more money in their pockets. Not great.



Marketers don’t grow up daydreaming about risk management and compliance. Personally, I never gave governance, risk or compliance (GRC) a second thought outside of making sure my team completed required compliance or phishing training from time to time.

So, when I was tasked with leading the General Data Protection Regulation (GDPR) compliance initiative at a previous employer, I was far from my comfort zone.

What I thought were going to be a few, small requirements regarding how and when we sent emails to contacts based in Europe quickly turned into a complete overhaul of how the organization collected, processed and protected personally identifiable information (PII).

As it turned out, I had completely underestimated the scope and importance of the project. My first mistake? Assuming compliance was “someone else’s issue.”

Risk management is a team sport

No risk leader can single-handedly assess, manage and resolve an organization’s risks. Without active involvement from business unit leaders across the company in marketing, human resources, sales and more, a company can never have a healthy risk-aware culture.

Leaders successful at developing that culture instill a company-wide team mentality with well-defined objectives, a clear scope and an agreed-upon allocation of responsibility. Ultimately, you need buy-in similar to the way a football coach needs players to buy into the team’s culture and plays for peak performance. While the company’s risk managers may be the quarterbacks when it comes to GRC, the team won’t win without key plays by linemen (sales), running backs (marketing) and receivers (procurement).

It is a risk leader’s job to facilitate conversations around risk and help guide business unit leaders to finding their own risk appetites. It’s not their job to define acceptable levels of risk for us, which is why CMOs, HR and sales leaders have no choice but to take an active role in defining risk for their departments.

Shifting my view on risk management

If I am being honest, I used to think about risk management only in terms of asset protection and cost reduction. My crash course in risk responsibility opened my eyes to the many ways GRC can actually speed up deals and, beyond that, drive revenue.



Spyware maker NSO Group used real phone location data on thousands of unsuspecting people when it demonstrated its new COVID-19 contact-tracing system to governments and journalists, researchers have concluded.

NSO, a private intelligence company best known for developing and selling governments access to its Pegasus spyware, went on the charm offensive earlier this year to pitch its contact-tracing system, dubbed Fleming, aimed at helping governments track the spread of COVID-19. Fleming is designed to allow governments to feed location data from cell phone companies to visualize and track the spread of the virus. NSO gave several news outlets each a demo of Fleming, which NSO says helps governments make public health decisions “without compromising individual privacy.”

But in May, a security researcher told TechCrunch that he found an exposed database storing thousands of location data points used by NSO to demonstrate how Fleming works — the same demo seen by reporters weeks earlier.

TechCrunch reported the apparent security lapse to NSO, which quickly secured the database, but said that the location data was “not based on real and genuine data.”

NSO’s claim that the location data wasn’t real differed from reports in Israeli media, which said NSO had used phone location data obtained from advertising platforms, known as data brokers, to “train” the system. Academic and privacy expert Tehilla Shwartz Altshuler, who was also given a demo of Fleming, said NSO told her that the data was obtained from data brokers, which sell access to vast troves of aggregate location data collected from the apps installed on millions of phones.

TechCrunch asked researchers at Forensic Architecture, an academic unit at Goldsmiths, University of London that studies and examines human rights abuses, to investigate. The researchers published their findings on Wednesday, concluding that the exposed data was likely based on real phone location data.

The researchers said if the data is real, then NSO “violated the privacy” of 32,000 individuals across Rwanda, Israel, Bahrain, Saudi Arabia and the United Arab Emirates — countries that are reportedly customers of NSO’s spyware.

The researchers analyzed a sample of the exposed phone location data by looking for patterns they expected to see with real people’s location data, such as a concentration of people in major cities and by measuring the time it took for individuals to travel from one place to another. The researchers also found spatial irregularities that would be associated with real data, such as star-like patterns that are caused by a phone trying to accurately pinpoint its location when the line of sight to the satellite is obstructed by tall buildings.

“The spatial ‘irregularities’ in our sample — a common signature of real mobile location tracks — further support our assessment that this is real data. Therefore, the dataset is most likely not ‘dummy’ nor computer generated data, but rather reflects the movement of actual individuals, possibly acquired from telecommunications carriers or a third-party source,” the researchers said.

The researchers built maps, graphs, and visualizations to explain their findings, while preserving the anonymity of the individuals whose location data was fed into NSO’s Fleming demo.

Gary Miller, a mobile network security expert and founder of cyber intelligence firm Exigent Media, reviewed some of the datasets and graphs, and concluded it was real phone location data.

Miller said the number of data points increased around population hubs. “If you take a scatter plot of cell phone locations at a given point in time, there will be consistency in the number of points in suburban versus urban locations,” he said. Miller also found evidence of people traveling together, which he said “looked consistent with real phone data.”

He also said that even “anonymized” location data sets can be used to tell a lot about a person, such as where they live and work, and who they visit. “One can learn a lot of details about individuals simply by looking at location movement patterns,” he said.

“If you add up all of the similarities it would be very difficult to conclude that this was not actual mobile network data,” he said.

A timeline of one person’s location data in Bahrain over a three-week period. Researchers say these red lines represent travel that seems plausible within the indicated time. (Image: Forensic Architecture/supplied)

John Scott-Railton, a senior researcher at Citizen Lab, said the data likely originated from phone apps that use a blend of direct GPS data, nearby Wi-Fi networks, and the phone’s in-built sensors to try to improve the quality of the location data. “But it’s never really perfect,” he said. “If you’re looking at advertising data — like the kind that you buy from a data broker — it would look a lot like this.”

Scott-Railton also said that using simulated data for a contact-tracing system would be “counterproductive,” as NSO would “want to train [Fleming] on data that is as real and representative as possible.”

“Based on what I saw, the analysis provided by Forensic Architecture is consistent with the previous statements by Tehilla Shwartz Altshuler,” said Scott-Railton, referring to the academic who said NSO told her that the data was real.

“The whole situation paints a picture of a spyware company once more being cavalier with sensitive and potentially personal information,” he said.

NSO rejected the researchers’ findings.

“We have not seen the supposed examination and have to question how these conclusions were reached. Nevertheless, we stand by our previous response of May 6, 2020. The demo material was not based on real and genuine data related to infected COVID-19 individuals,” said an unnamed spokesperson. (NSO’s earlier statement made no reference to individuals with COVID-19.)

“As our last statement details, the data used for the demonstrations did not contain any personally identifiable information (PII). And, also as previously stated, this demo was a simulation based on obfuscated data. The Fleming system is a tool that analyzes data provided by end users to help healthcare decision-makers during this global pandemic. NSO does not collect any data for the system, nor does NSO have any access to collected data.”

NSO did not answer our specific questions, including where the data came from and how it was obtained. The company claims on its website that Fleming is “already being operated by countries around the world,” but declined to confirm or deny its government customers when asked.

The Israeli spyware maker’s push into contact tracing has been seen as a way to repair its image, as the company battles a lawsuit in the United States that could see it reveal more about the governments that buy access to its Pegasus spyware.

NSO is currently embroiled in a lawsuit with Facebook-owned WhatsApp, which last year blamed NSO for exploiting an undisclosed vulnerability in WhatsApp to infect some 1,400 phones with Pegasus, including journalists and human rights defenders. NSO says it should be afforded legal immunity because it acts on behalf of governments. But Microsoft, Google, Cisco, and VMware filed an amicus brief this week in support of WhatsApp, and calling on the court to reject NSO’s claim to immunity.

The amicus brief came shortly after Citizen Lab found evidence that dozens of journalists were also targeted with Pegasus spyware by NSO customers, including Saudi Arabia and the United Arab Emirates. NSO disputed the findings.


