December 2020

Scale AI CEO Alexandr Wang doesn’t need a crystal ball to see where artificial intelligence will be used in the future. He just looks at his customer list.

The four-year-old startup, which recently hit a valuation of more than $3.5 billion, got its start supplying autonomous vehicle companies with the labeled data needed to train machine learning models to develop and eventually commercialize robotaxis, self-driving trucks and automated bots used in warehouses and on-demand delivery.

The wider adoption of AI across industries has been a bit of a slow burn over the past several years as company founders and executives have begun to understand what the technology could do for their businesses.

In 2020, that changed as e-commerce, enterprise automation, government, insurance, real estate and robotics companies turned to Scale’s visual data labeling platform to develop and apply artificial intelligence to their respective businesses. Now, the company is preparing for the customer list to grow and become more varied.

How 2020 shaped up for AI

Scale AI’s customer list has included an array of autonomous vehicle companies such as Alphabet, Voyage, nuTonomy, Embark, Nuro and Zoox. While it began to diversify with additions like Airbnb, DoorDash and Pinterest, there were still sectors that had yet to jump on board. That changed in 2020, Wang said.

Scale began to see incredible use cases of AI within the government as well as enterprise automation, according to Wang. Scale AI began working more closely with government agencies this year and added enterprise automation customers like States Title, a residential real estate company.

Wang also saw an increase in uses of conversational AI, in both consumer and enterprise applications, as well as growth in e-commerce as companies sought ways to use AI to provide personalized recommendations for their customers on par with Amazon’s.

Robotics continued to expand as well in 2020, although it spread to use cases beyond robotaxis, autonomous delivery and self-driving trucks, Wang said.

“A lot of the innovations that have happened within the self-driving industry, we’re starting to see trickle out throughout a lot of other robotics problems,” Wang said. “And so it’s been super exciting to see the breadth of AI continue to broaden and serve our ability to support all these use cases.”

The wider adoption of AI across industries has been a bit of a slow burn over the past several years as company founders and executives have begun to understand what the technology could do for their businesses, Wang said, adding that advancements in natural language processing of text, improved offerings from cloud companies like AWS, Azure and Google Cloud, and greater access to datasets helped sustain this trend.

“We’re finally getting to the point where we can help with computational AI, which has been this thing that’s been pitched for forever,” he said.

That slow burn heated up with the COVID-19 pandemic, said Wang, noting that interest has been particularly strong within government and enterprise automation as these entities looked for ways to operate more efficiently.

“There was this big reckoning,” Wang said of 2020 and the effect that COVID-19 had on traditional business enterprises.

If the future is mostly remote, with consumers buying online instead of in person, companies started to ask, “How do we start building for that?” according to Wang.

The push for operational efficiency, coupled with the capabilities of the technology, is only going to accelerate the use of AI for automating processes like mortgage applications or customer loans at banks, said Wang, who noted that outside of the tech world there are industries that still rely on a lot of paper and manual processes.



When Salesforce acquired Quip in 2016 for $750 million, it gained CEO and co-founder Bret Taylor as part of the deal. Taylor has since risen quickly through the ranks of the software giant to become president and COO, second in command behind CEO Marc Benioff. Taylor’s experience shows that startup founders can sometimes play a key role in the companies that acquire them.

Benioff, 56, has been running Salesforce since its founding more than 20 years ago. While he hasn’t given any public hints that he intends to leave anytime soon, if he wanted to step back from the day-to-day running of the company or even job-share the role, he has a deep bench of executive talent, including many experienced CEOs who, like Taylor, came to the company via acquisition.

Benioff and his wife Lynne have been active in charitable giving, and in 2016 they signed The Giving Pledge, the initiative created by Bill and Melinda Gates and Warren Buffett, committing to give a majority of their wealth to philanthropy. One could see him wanting to put more time into these charitable endeavors, just as Gates did 20 years ago. By way of comparison, Gates founded Microsoft in 1975 and stayed for 25 years before stepping down as CEO in 2000, eventually leaving to run his charitable foundation full time.

Even if this remains purely speculative for the moment, there is a group of people behind him with deep industry experience, who could be well-suited to take over should the time ever come.

Resurrecting the co-CEO role

One way to step back from the enormous responsibility of running Salesforce would be by sharing the role. In fact, for more than a year starting in 2018, Benioff shared the top job with Keith Block, until Block’s departure earlier this year. When they worked together, the arrangement seemed to work out just fine, with Block dealing with many larger customers and helping the software giant reach its $20 billion revenue goal.

Before Block became co-CEO, he held myriad other high-level titles, including co-chairman, president and COO — two of which, by the way, Taylor holds today. That was a lot of responsibility for one person inside a company the size of Salesforce, but promoting him from COO to co-CEO gave the company a way to reward his hard work and help keep him from jumping ship (he eventually did anyway).

As Holger Mueller, an analyst at Constellation Research, points out, the co-CEO concept has worked out well at major enterprise companies that have tried it in the past, and it helped with continuity. “Salesforce, SAP and Oracle all didn’t miss a beat really with the co-CEO departures,” he said.

If Benioff wanted to go back to the shared responsibility model and take some work off his plate, making Taylor (or someone else) co-CEO would be one way to achieve that. Certainly, Brent Leary, lead analyst at CRM Essentials, sees Taylor gaining increasing responsibility as time goes along, giving credence to the idea.

“Ever since Quip was acquired Taylor seemed to be on the fast track, becoming president and chief product officer less than a year-and-a-half after the acquisition, and then two years later being promoted to chief operating officer,” Leary said.

Who else could be in line?

While Taylor isn’t the only person who could step into Benioff’s shoes, he looks like he has the best shot at the moment, especially in light of the $27.7 billion Slack deal he helped deliver earlier this month.

“Taylor being publicly praised by Benioff for playing a significant role in the Slack acquisition, Salesforce’s largest acquisition to date, shows how much he has solidified his place at the highest levels of influence and decision-making in the organization,” Leary pointed out.

But Mueller posits that his rapid promotions could also show something might be lacking with internal options, especially around product. “Taylor is a great, smart guy, but his rise shows more the product organization bench depth challenges that Salesforce has,” he said.



What could go wrong?

Hello and welcome back to Equity, TechCrunch’s venture-capital-focused podcast (now on Twitter!), where we unpack the numbers behind the headlines. As you can see, this is our yearly predictions episode. Our behind-the-scenes guru Chris Gates joins us on the mic, we take shots at our prior prognostications, and nosh on what we feel is positively presaged.

As always, this episode is in good fun. If you don’t agree with what we think is up ahead, that’s fine. You’re probably right. But we’re nothing if not up for a challenge, so we kept the tradition alive this year.

This is the last Equity episode of 2020. And while we can’t tell you yet what our plans are for 2021, we can say — nay, project — that there are a lot of fun and big things coming for Equity. We’re planning our busiest year ever, by far.

And with that, we’re out of here. Thanks for several million downloads this year, our biggest annum to date.

Equity drops every Monday at 7:00 a.m. PST and Thursday afternoon as fast as we can get it out, so subscribe to us on Apple Podcasts, Overcast, Spotify and all the casts.



Samsung Electronics vice chairman Jay Y. Lee faces a nine-year prison term in the bribery case that contributed to the downfall of former president Park Geun-hye. Prosecutors argued that the length of the sentence is warranted because of Samsung’s power as the largest chaebol, or family-owned conglomerate, in South Korea.

“Samsung is a group with such overwhelming power that it is said Korean companies are divided into Samsung and non-Samsung,” they said during a final hearing on Wednesday, reports the Korea Herald. The final ruling is scheduled for January 18.

The bribery case is separate from another trial Lee is involved in, over alleged accounting fraud and stock-price manipulation. Hearings in that case began in October.

The bribery case dates back to 2017, when Lee was convicted of bribing Park and her close associate Choi Soon-sil and sentenced to five years in prison. Prosecutors allege the bribes were meant to secure government backing for Lee’s attempt to inherit control of Samsung from his father Lee Kun-hee, then its chairman. The illegal payments were a major part of the corruption scandal that led to Park’s impeachment, arrest and 25-year prison sentence.

Lee was freed in 2018 after the sentence was reduced and suspended on appeal, and returned to work as Samsung’s de facto head, a position he took after his father had a heart attack in 2014.

In August 2019, however, the Supreme Court ruled that the reduced sentence was too lenient, overturning the appeals court and ordering that the case be retried in the Seoul High Court.

The elder Lee, who was reportedly South Korea’s wealthiest citizen, died in October. He was worth an estimated $20.7 billion, and under the country’s tax system his heirs could be liable for estate taxes of about $10 billion, Fortune reported.

TechCrunch has contacted Samsung for comment.



Amazon makes a big podcast acquisition, a Chinese robot maker raises $100 million and we review a robotic cat pillow. This is your Daily Crunch for December 30, 2020.

The big story: Amazon acquires Wondery

Amazon is the latest company to make a big acquisition in the podcast market — it’s buying Wondery, the podcast network behind shows like “Dirty John” and “Dr. Death.”

Although Wondery is becoming part of Amazon Music (which added podcast support in September), the company also says that “nothing will change for listeners” and that Wondery’s podcasts will continue to be available from “a variety of providers.”

The financial terms of the deal were not disclosed.

Startups, funding and venture capital

China’s adaptive robot maker Flexiv raises over $100M — Wang Shiquan, an alumnus of Stanford’s Biomimetics and Dexterous Manipulation Lab, founded Flexiv with a focus on building adaptive robots for the manufacturing industry.

Biteable raises a $7M Series A for its template-based online video builder — The product is designed for creating video assets that have more staying power than temporary social videos.

An earnest review of a robotic cat pillow — It’s cute!

Advice and analysis from Extra Crunch

On the diversity front, 2020 may prove a tipping point — VCs have talked about diversity for eons without doing much about it.

2020 will change the way we look at robotics — From logistics to food prep, robots are custom-built to help mankind survive a pandemic.

Dear Sophie: Tips for getting a National Interest green card by myself? — The latest edition of Dear Sophie, attorney Sophie Alcorn’s advice column answering immigration-related questions about working at technology companies.

(Extra Crunch is our membership program, which aims to democratize information about startups. You can sign up here for a holiday deal good through January 3. Read more about the deal here.)

Everything else

Section 230 is threatened in new bill tying liability shield repeal to $2,000 checks — The move seems more like a political maneuver than a real stab at tech regulation.

NSO used real people’s location data to pitch its contact-tracing tech, researchers say — Researchers say NSO’s use of real data “violated the privacy” of thousands of unwitting people.

The Daily Crunch is TechCrunch’s roundup of our biggest and most important stories. If you’d like to get this delivered to your inbox every day at around 3pm Pacific, you can subscribe here.



You don’t need Qoobo in your life. Nobody needs Qoobo, exactly. In fact, first reactions tend to range from befuddlement to bemusement. The robotic cat pillow doesn’t make a ton of sense on the face of it – in part because Qoobo has no face.

The handful of times I’ve interacted with the original Qoobo in person, reactions have been pretty uniform. The initial confusion gives way to the question of why such a thing needs to exist. And then, inevitably, someone asks how they can buy one of their own.

The original, larger version was fairly difficult to get here in the States for a while, owing to the limitations a small robotics company has in bringing its product to a brand-new market. I suspect there was also a question of whether such an idiosyncratic product would translate. In the end, however, there’s nothing particularly confusing about it.

At its subtly beating heart is an attempt to deliver comfort in a small, furry package. It’s something we could all probably use more of these days. Following a successful Indiegogo campaign, the new Petit Qoobo delivers that in a smaller, more affordable design. “Petit Qoobo is a cushion-shaped robot with a tail,” the included User Guide begins. “When stroked, the tail waves gently.”

Honestly, that’s kind of the whole deal here. It’s a furry pillow with a robotic tail that wags when petted. Pet it more vigorously and the tail responds in kind. The pillow has a built-in mic that listens for sound (though not specific words), which can elicit a wag. I’ve found that things like a knock on the door or loud music can also trigger this effect. It will also just wag at random, “just to say ‘hello.’”

Petit Qoobo is sitting on my lap as I write this. And yes, it’s soothing. It’s not a replacement for a real pet – but I also know full well that my real pet would not be as chill about sitting on my lap while I try to get some work done. When I’m finished petting Qoobo, there’s no protest – the tail simply goes slack.

The robot will also “go to sleep” after extensive petting – to save on charge, one assumes. When the time comes to recharge, there’s a port located – let’s just say it’s near the tail. A zipper along the outside makes it possible to remove the fur coat altogether for cleaning.

The tail mechanism isn’t loud, per se, but it’s audible. You can hear the actuators moving as it goes to work. Honestly, the buzzing is more charming than anything. The only time it’s an issue is when using the device as a pillow. Qoobo’s other clever trick is a quiet heartbeat that triggers when squeezed. It’s a nice, calming effect – though one that can sometimes be overpowered by the tail noise.

The device is part of a long and fascinating lineage of Japanese therapy robotics. The most notable example is probably Paro, which dates back to the ’90s. The baby seal was designed to calm and comfort patients in hospitals and nursing homes – essentially a way to bring the benefits of therapy animals without having to have actual animals involved. Of course, that project – which ultimately cost around $15 million in development – is on an entirely different scale than this product from Yukai Engineering.

But the result isn’t entirely dissimilar. There are just certain parts of us that are wired to want to pet something furry and hear a heartbeat – both boxes this strange little robot happily checks. I certainly feel a bit calmer writing this — and that’s probably the most you can ask for, these days.



Earlier this month, Hyundai acquired a controlling stake in Boston Dynamics that valued the company at $1.1 billion. What’s most interesting about the news isn’t the acquisition itself (it does, after all, find Boston Dynamics changing hands for the third time in seven years), but rather what the company’s evolution tells us about the state of robotics in 2020.

When the Waltham, Massachusetts-based startup was acquired by Google in 2013, it was still a carefully cultivated mystery. That the internet’s response to the company was largely one of curiosity shaded with discomfort should come as little surprise. Boston Dynamics’ primary output from a public relations perspective was viral videos of impressive but imposing quadrupedal robots built with the aid of defense department contracts. It doesn’t take a giant leap to begin coloring in the gaps with dystopian sentiment.

In instances where robotic deployment has been successful, the technology has helped ease the burden on an impacted workforce.

Some of that has continued to follow the company, of course. Even in the age of short attention spans, one doesn’t quickly forget an image of a man in a fleece vest unsuccessfully attempting to kick over a headless buzzing robot in an empty parking lot. Heck, to this day every post I do about the company is greeted with multiple gifs of the knife-wielding robot from the “Metalhead” episode of “Black Mirror.”

While the company is still committed to its more bleeding-edge R&D concepts, Hyundai didn’t purchase a strange little MIT spin-off that makes viral internet videos. It purchased a company actively working to monetize those efforts. As CEO Robert Playter told me in a recent interview, the company has sold 400 Spots since opening initial availability around 15 months ago. It’s not a huge number, but it’s a sign that interest in the company’s products extends well beyond novelty.

Spot’s primary task at the moment involves surveying dangerous workplaces, from nuclear reactors to oil rigs. Boston Dynamics’ next product, Handle, will move boxes around a warehouse. That robot is set to go on sale at some point in 2022. “I think something like a robot every couple of years is a pace that we could manage,” Playter told me. “From clean sheet, we can build a new robot in under a year. And then you have to go through an iterative process of refining that concept and starting to understand market fit.”

Maturity in this industry requires a level of pragmatism. Tasked with describing the state of robotics in 2020, I would probably say it’s something like, “Cool technology employed for uncool tasks.” You can, no doubt, identify exceptions (making special effects for movies, as Bot & Dolly did, is decidedly cool), but on the whole, Boston Dynamics is a perfect example of impressive robots doing boring stuff. Any roboticist will happily hammer into you the concept of the three Ds — the dull, dirty and dangerous jobs where the technology is most likely to be deployed.



SpaceX will try a significantly different approach to landing its future reusable rocket boosters, according to CEO and founder Elon Musk. It will attempt to ‘catch’ the heavy booster, which is currently in development, using the launch tower arm used to stabilize the vehicle during its pre-takeoff preparations. Current Falcon 9 boosters return to Earth and land propulsively on their own built-in legs – but the goal with Super Heavy is for the larger rocket not to have legs at all, says Musk.

The Super Heavy landing process will still involve using its engines to control the velocity of its descent, as well as the grid fins on its main body to control its orientation during flight. Rather than touching down, however, the booster would be “caught” – essentially hooked by the launch tower arm before it reaches the ground at all. The main benefit of this method, which will obviously involve a lot of precision maneuvering, is that SpaceX can save both cost and weight by omitting landing legs from the Super Heavy design altogether.

Another potential benefit raised by Musk is that it could allow SpaceX to essentially recycle the Super Heavy booster immediately back on the launch mount it returns to – possibly enabling it to be ready to fly again with a new payload and upper stage (consisting of Starship, the other spacecraft SpaceX is currently developing and testing) in “under an hour.”

The goal for Starship and Super Heavy is to create a launch vehicle that’s even more reusable than SpaceX’s current Falcon 9 (and Falcon Heavy) system. Eventually, the goal for Musk is to have Starship making regular and frequent flights – for point-to-point flight on Earth, for orbital missions closer to home, and for long-distance runs to the Moon and Mars. The pace at which he envisions these happening in order to make it possible to colonize Mars with a continuous human population requires the kind of rapid recycling and reflying of Super Heavy he described today with this proposed new landing method.

Starship prototypes are currently being constructed and tested in Boca Chica, Texas, where SpaceX has been flying the pre-production spaceship during the past year. The company is also working on elements of the Super Heavy booster, and Musk said recently that it intends to begin actively flight-testing that component of the launch system in a few months’ time.



The Qatar Investment Authority is investing $125 million into the energy storage systems integrator and power management tech developer Fluence, in a deal that will value the company at over $1 billion.

The joint venture between the American independent power producer AES Corp. and the German industrial conglomerate Siemens was already worth $900 million prior to the transaction, according to Marek Wolek, the vice president of strategy and partnerships at Fluence.

With the new cash, Fluence will look to develop and acquire software and services that can expand the company’s offerings to its core clients among utilities and independent power project developers, Wolek said.

And it might not be too long before the company seeks additional liquidity from the public markets, Wolek said. He noted that the QIA is already backing the battery company QuantumScape, which went public through a merger with a special purpose acquisition company in late November and whose shares have been on a meteoric rise ever since.

After the QIA investment, AES and Siemens will remain majority shareholders, each holding a 44 percent stake in the company.

“We believe the global problem of climate change can only be tackled by leveraging the combined capabilities of technologists and investors from around the world,” said Manuel Perez Dubuc, Fluence’s chief executive officer, in a statement. “We see energy storage as the linchpin of a decarbonized grid and adding QIA to our international shareholder base will allow Fluence to innovate even faster and address the enormous global market for large-scale battery-based energy storage.”

One of six founding members of the One Planet Sovereign Wealth Fund Initiative, QIA is a multibillion-dollar investment vehicle that has significant stores of capital to continue its support of climate tech companies like Fluence.

Fluence has already deployed roughly 5 gigawatts of energy storage and management systems to a wide array of customers, according to Wolek.

And while Wolek said Fluence sees itself and its energy storage business as a key component of the global decarbonization that needs to occur to combat climate change, electric storage isn’t the only technology that’s needed.

“It’s difficult looking at the energy market and looking at one technology and saying that one technology is going to solve everything,” said Wolek.

Rather, the company’s role is to ensure that the battery technology Fluence is deploying can be integrated with the other technologies required to provide industry and society with the power they need, he said. “We want to absolutely be the experts on battery-based storage,” Wolek said. “At the same time we do invest quite a bit on the digital side to expand our dispatch capabilities beyond storage.”

That could mean teaming up with other energy suppliers (like developers of hydrogen fuel projects) in the future, he said.

“We want to master the energy piece on the battery side,” Wolek said of the company’s ultimate goal.

That goal puts the company on something of a collision course with the energy business being built by Elon Musk’s Tesla.

The billion-dollar valuation that Fluence currently enjoys and the $36.6 billion market cap that QuantumScape commands go some way toward explaining why Tesla can be considered a company worth over $650 billion.

 



How do you make a virtual event accessible for people who are blind or visually impaired?

When I started work on Sight Tech Global back in June this year, I was confident that we would find the answer to that question pretty quickly. With so many virtual event platforms and online ticketing options available to virtual event organizers, we were sure at least one would meet a reasonable standard of accessibility for people who use screen readers or other devices to navigate the Web.

Sadly, I was wrong about that. As I did my due diligence and spoke to CEOs at a variety of platforms, I heard a lot of “we’re studying WCAG [Web Content Accessibility Guidelines] requirements” or “our developers are going to re-write our front-end code when we have time.” In other words, these operations, like many others on the Web, had not taken the trouble to code their sites for accessibility at the start, which is the least costly and fairest approach, not to mention the one compliant with the ADA.

This realization was a major red flag. We had announced our event dates – Dec 2-3, 2020 – and there was no turning back. Dmitry Paperny, our designer, and I did not have much time to figure out a solution. No less important than the dates was the imperative that the event’s virtual experience work well for blind attendees, given that our event was really centered on that community.

We decided to take Occam’s razor to the conventions surrounding virtual event experiences and answer a key question: What was essential? Virtual event platforms tend to be feature heavy, which compounds accessibility problems. We ranked what really mattered, and the list came down to three things:

  • live-stream video for the “main stage” events
  • a highly navigable, interactive agenda
  • interactive video for the breakout sessions

We also debated adding a social or networking element as well, and decided that was optional unless there was an easy, compelling solution.

The next question was what third-party tools we could use. The very good news was that YouTube and Zoom get great marks for accessibility. People who are blind are familiar with both, and many know the keyboard commands to navigate the players. We discovered this largely by word of mouth at first and then found ample supporting documentation at YouTube and Zoom. So we chose YouTube for our main stage programming and Zoom for our breakouts. It’s helpful, of course, that it’s very easy to incorporate both YouTube and Zoom in a website, which became our plan.

Where to host the overall experience was the next question. We wanted to be able to direct attendees to a single URL in order to join the event. Luckily, we had already built an accessible website to market the event. Dmitry had learned a lot in the course of designing and coding that site, including the importance of thinking about both blind and low-vision users. So we decided to add the event experience to our site itself – instead of using a third-party event platform – by adding two elements to the site navigation – Event (no longer live on the site) and Agenda.

The first amounted to a “page” (in WordPress parlance) that contained the YouTube live player embed, and beneath that text descriptions of the current session and the upcoming session, along with prominent links to the full Agenda. Some folks might ask, why place the agenda on a separate page? Doesn’t that make it more complicated? Good question, and the answer was one of many revelations that came from our partner Fable, which specializes in usability testing for people with disabilities. The answer, as we found time and again, was to imagine navigating with a screen reader, not your eyes. If the agenda were beneath the YouTube Player, it would create a cacophonous experience – imagine trying to listen to the programming and at the same time “read” (as in “listen to”) the agenda below. A separate page for the agenda was the right idea.

The Agenda page was our biggest challenge because it contained a lot of information, required filters and also, during the show, had different “states” – as in which agenda items were “playing now” versus upcoming versus already concluded. Dmitry learned a lot about the best approach to drop downs for filters and other details to make the agenda page navigable, and we reviewed it several times with Fable’s experts. We decided nonetheless to take the fairly unprecedented step of inviting our registered, blind event attendees to join us for a “practice event” a few days before the show in order to get more feedback. Nearly 200 people showed up for two sessions. We also invited blind screen reader experts, including Fable’s Sam Proulx and Facebook’s Matt King, to join us to answer questions and sort out the feedback.
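
To make those agenda “states” concrete, here is a minimal sketch of one way a page can announce them to screen readers – an illustration for this writeup, not the actual Sight Tech Global code. A polite `aria-live` region tells a screen reader when a session goes live without interrupting whatever the listener is doing; the sample session data, the `session-` element IDs and the `visually-hidden` CSS class are all assumptions for the example.

```typescript
// Illustrative sketch only: announcing agenda state changes to screen readers.
interface Session {
  id: string;
  title: string;
  startsAt: Date;
  endsAt: Date;
}

// Sample data standing in for the real agenda feed (an assumption for this sketch).
const sessions: Session[] = [
  {
    id: 'keynote',
    title: 'Opening keynote',
    startsAt: new Date('2020-12-02T09:00:00-08:00'),
    endsAt: new Date('2020-12-02T09:45:00-08:00'),
  },
];

// A polite live region: screen readers announce its updates at the next pause,
// rather than interrupting the livestream audio.
const liveRegion = document.createElement('div');
liveRegion.setAttribute('aria-live', 'polite');
liveRegion.setAttribute('role', 'status');
liveRegion.className = 'visually-hidden'; // assumed CSS: hidden visually, exposed to screen readers
document.body.appendChild(liveRegion);

type State = 'upcoming' | 'playing now' | 'concluded';

function stateOf(s: Session, now: Date): State {
  if (now < s.startsAt) return 'upcoming';
  return now > s.endsAt ? 'concluded' : 'playing now';
}

// Refresh each agenda item's label and announce sessions as they go live.
function refreshAgenda(now: Date = new Date()): void {
  for (const s of sessions) {
    const item = document.getElementById(`session-${s.id}`); // assumed markup
    if (!item) continue;
    const state = stateOf(s, now);
    // The state goes *after* the title, so a screen reader hears the title first.
    item.textContent = `${s.title} (${state})`;
    if (state === 'playing now' && item.dataset.announced !== 'true') {
      liveRegion.textContent = `Now playing: ${s.title}`;
      item.dataset.announced = 'true';
    }
  }
}

refreshAgenda();
setInterval(() => refreshAgenda(), 30_000); // re-check every 30 seconds
```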

It’s worth noting that there are three major screen readers: JAWS, which is used mostly on Windows; VoiceOver, which is on all Apple products; and NVDA, which is open source and works on PCs running Microsoft Windows 7 SP1 and later. They don’t all work in the same way, and the people who use them range from experts who know hundreds of keyboard commands to occasional users who have more basic skills. For that reason, it’s really important to have expert interlocutors who can help separate good suggestions from simple frustrations.

The format for our open house (session one and session two) was a Zoom meeting, where we provided a briefing about the event and how the experience worked. Then we provided links to a working Event page (with a YouTube player active) and the Agenda page and asked people to give it a try and return to the Zoom session with feedback. Like so much else in this effort, the result was humbling. We had the basics down well, but we had missed some nuances, such as the best way to order information in an agenda item for someone who can only “hear” it versus “see” it. Fortunately, we had time to tune the agenda page a bit more before the show.

The practice session also reinforced that we had made a good move to offer live customer support during the show as a buffer for attendees who were less sophisticated in the use of screen readers. We partnered with Be My Eyes, a mobile app that connects blind users to sighted helpers who use the blind person’s phone camera to help troubleshoot issues. It’s like having a friend look over your shoulder. We recruited 10 volunteers and trained them to be ready to answer questions about the event, and Be My Eyes put them at the top of the list for any calls related to Sight Tech Global, which was listed under the Be My Eyes “event” section. Our event host, the incomparable Will Butler, who happens to be a vice president at Be My Eyes, regularly reminded attendees to use Be My Eyes if they needed help with the virtual experience.

A month out from the event, we were feeling confident enough that we decided to add a social interaction feature to the show. Word on the street was that Slido’s basic Q&A features worked well with screen readers, and in fact Fable used the service for its projects. So we added Slido to the program. We did not embed a Slido widget beneath the YouTube player, which might have been a good solution for sighted participants, but instead added to each agenda session a link to a standalone Slido page, where attendees could add comments and ask questions without getting tangled in the agenda or the livestream. The solution ended up working well, and we had more than 750 comments and questions on Slido during the show.

When Dec. 2 finally arrived, we were ready. But the best-laid plans often go awry: we were only minutes into the event when our live closed captioning suddenly broke. We decided to halt the show until we could bring it back up live, for the benefit of deaf and hard-of-hearing attendees. After much scrambling, captioning came back. (See more on captioning below.)

Otherwise, the production worked well from a programming standpoint as well as an accessibility one. How did we do? Of the 2,400+ registered attendees at the event, 45% said they planned to use screen readers. When we surveyed those attendees immediately after the show, 95 replied, and they gave the experience a 4.6/5 score. As for the programming, our attendees (this time we asked everyone; 157 replied) gave us a score of 4.7/5. Needless to say, we were delighted by those outcomes.

One other note concerned registration. At the outset, we also “heard” that one of the event registration platforms was “as good as it gets” for accessibility. We took that at face value, which was a mistake. We should have tested: comments from people trying to register, along with a low turnout of registrations from blind people, revealed after a few weeks that the registration site may have been better than the rest but was still really disappointing. It was painful, for example, to learn from one of our speakers that alt tags were missing from images (and there was no way to add them) and that screen reader users had to tab through mountains of information in order to get to actionable links, such as “register.”

As we did with our approach to the website, we decided that the best course was to simplify. We added a Google Form as an alternative registration option; Google Forms are highly accessible. We instantly saw our registrations increase strongly, particularly among blind people. We were chagrined to realize that our first choice for registration had been excluding the very people our event intended to include.

We were able to use the Google Forms option because the event was free. Had we been trying to collect payment of registration fees, Google Form would not have been an option. Why did we make the event free to all attendees? There are several reasons. First, given our ambitions to make the event global and easily available to anyone interested in blindness, it was difficult to arrive at a universally acceptable price point. Second, adding payment as well as a “log-in” feature to access the event itself would create another accessibility headache. With our approach, anyone with the link to the Agenda or Event page could attend without any log-in demand or registration. We knew this would create some leakage in terms of knowing who attended the event – quite a lot in fact because we had 30% more attendees than registrants – but given the nature of the event we thought that losing out on names and emails was an acceptable price to pay considering the accessibility benefit.

If there is an overarching lesson from this exercise, it’s simply this: Event organizers have to roll up their sleeves and really get to the bottom of whether the experience is accessible or not. It’s not enough to trust platform or technology vendors, unless they have standout reputations in the community, as YouTube and Zoom do. It’s as important to ensure that the site or platform is coded appropriately (to WCAG standards, using a tool like Google’s Lighthouse) as it is to do real-world testing to ensure that the actual, observable experience of blind and low-vision users is a good one. At the end of the day, that’s what counts the most.
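
For what that automated check can look like, here is a minimal sketch using Lighthouse’s Node API – a hypothetical script for illustration, not something we ran for the event. It assumes the `lighthouse` and `chrome-launcher` npm packages plus a local Chrome install, and the URL is a placeholder. And to repeat the point above: a passing score is only a floor; testing with real screen reader users is what counts.

```typescript
// Hypothetical accessibility audit sketch using Lighthouse's Node API.
// Assumes `npm install lighthouse chrome-launcher` and a local Chrome install.
import lighthouse from 'lighthouse';
import * as chromeLauncher from 'chrome-launcher';

async function auditAccessibility(url: string): Promise<void> {
  // Launch a headless Chrome instance for Lighthouse to drive.
  const chrome = await chromeLauncher.launch({ chromeFlags: ['--headless'] });
  try {
    const result = await lighthouse(url, {
      port: chrome.port,
      output: 'json',
      onlyCategories: ['accessibility'], // skip performance, SEO, etc.
    });
    if (result) {
      const score = result.lhr.categories.accessibility.score;
      console.log(`Accessibility score for ${url}:`, (score ?? 0) * 100);
    }
  } finally {
    await chrome.kill(); // always shut Chrome down, even if the audit throws
  }
}

auditAccessibility('https://example.com'); // placeholder URL
```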

A final footnote. Although our event focused on accessibility issues for people who are blind or have low vision, we were committed from the start to include captions for people who would benefit. We opted for the best quality outcome, which is still human (versus AI) captioners, and we worked with VITAC to provide captions for the live Zoom and YouTube sessions and 3Play Media for the on-demand versions and the transcripts, which are now part of the permanent record. We also heard requests for “plain text” (no mark-up) versions of the transcripts in an easily downloadable format for people who use braille readers. We supplied those as well. You can see how all those resources came together on pages like this one, which contain all the information on a given session and are linked from the relevant section of the agenda.



Amazon just announced that it’s acquiring Wondery, the network behind podcasts including “Dirty John” and “Dr. Death.”

Wondery will become part of Amazon Music, which added support for podcasts (including its own original shows) in September. At the same time, the announcement claims that “nothing will change for listeners” and that the network’s podcasts will continue to be available from “a variety of providers.”

Media companies and streaming audio platforms are all making big bets on podcasting, with Spotify making a series of acquisitions including podcast network Gimlet, SiriusXM acquiring Stitcher and The New York Times acquiring Serial Productions. Amazon is coming relatively late to this market, but it will now have the support of a popular podcast maker as it works to catch up.

“With Amazon Music, Wondery will be able to provide even more high-quality, innovative content and continue their mission of bringing a world of entertainment and knowledge to their audiences, wherever they listen,” Amazon wrote.

Financial terms were not disclosed. The Wall Street Journal previously reported that acquisition talks were in the works, and that those talks valued Wondery at around $300 million.

The startup was founded in 2016 by former Fox executive Hernan Lopez (who is currently fighting federal corruption charges tied to his time at Fox). Numbers from Podtrac ranked it as the fourth-largest podcast publisher in November, with a U.S. audience of more than 9 million unique listeners.

Wondery has raised a total of $15 million in funding from Advancit Capital, BDMI, Greycroft, Lerer Hippeau and others, according to Crunchbase.

 



Here’s another edition of “Dear Sophie,” the advice column that answers immigration-related questions about working at technology companies.

“Your questions are vital to the spread of knowledge that allows people all over the world to rise above borders and pursue their dreams,” says Sophie Alcorn, a Silicon Valley immigration attorney. “Whether you’re in people ops, a founder or seeking a job in Silicon Valley, I would love to answer your questions in my next column.”

Extra Crunch members receive access to weekly “Dear Sophie” columns; use promo code ALCORN to purchase a one- or two-year subscription for 50% off.


Dear Sophie:

I’m working in the Bay Area on an H-1B visa and my employer won’t sponsor my green card.

I really want permanent residence, but I never won a Nobel Prize; I’m single; and I don’t have a million dollars yet. However, I think I might qualify for an EB-2 NIW green card.

What can you share?

— National in Napa

Dear National:

Wonderful that you’re taking matters into your own hands! This is a complicated process, so the most important advice I can give you is to retain an experienced business immigration attorney to represent you and prepare and file your green card case.

For additional dos and don’ts in U.S. immigration, please check out the recent podcast that my law firm partner, Anita Koumriqian, and I posted on the commandments of immigration (and especially what not to do when it comes to visas and green cards).

This particular episode focuses on family-based green cards, but these recommendations are timeless and apply to individuals who are self-petitioning for employment-based green cards, such as the EB-2 NIW (National Interest Waiver) for exceptional ability and the EB-1A for extraordinary ability. Our top recommendation in that podcast episode is to avoid DIY immigration, so definitely retain legal counsel!

Filing for an EB-2 NIW or any green card requires more than just filling out the appropriate forms. You need to understand the process, the law and the legal requirements, and the analysis of whether and how you can best qualify is complicated.

With any immigration matter, one needs the resources to fully understand the process, the steps for applying, and the timing and deadlines. We want to make sure that you always maintain legal status (never falling out of status) so that you can remain in the U.S.



This has been the year of the social organization. As the COVID-19 pandemic swept across the world and the United States, governments and a patchwork of nonprofits and volunteer organizations sprang into action, offering everything from food and medical supplies to children’s books and clothing to individuals and families struggling in the virus’s wake.

Perhaps the biggest barrier to getting people help, though, has been digital: nonprofits need to connect with their beneficiaries over the internet just as much as any retailer today. Unfortunately, tech talent is expensive and hard to find, particularly for often cash-strapped nonprofits.

That was part of the impetus for two Stanford seniors, Mary Zhu and Amay Aggarwal, to co-found Develop for Good, a matching service designed to connect motivated and ambitious undergrads in computer science, design and economics to nonprofits with specific projects that require expertise. They launched the network in March as the pandemic started spreading rapidly, and since then, the organization itself has been growing exponentially.

Develop for Good “was in response to [the pandemic], but at the same time, a lot of our peers were having their internships canceled, [and] a lot of companies were having hiring freezes,” Zhu explained. “People were also seeking opportunities to be able to develop their professional skills and develop their project experience.” This coincidence of needs among both students and nonprofits helped accelerate the matching that Develop for Good offers.

So far, the 501(c)(3) non-profit has coordinated more than 25,000 volunteer hours across groups like the Ronald McDonald House, UNICEF, the Native American Rights Fund (NARF), Easterseals, The Nature Conservancy, Save the Children, AARP and more. The program, which in its first batch focused on Zhu and Aggarwal’s network at Stanford, has since expanded to more than a dozen schools across the United States. The two first reached out to nonprofits through Stanford’s alumni network, although as the program’s reputation has grown, they have started getting inbound interest as well.

Volunteers take on a project for 5-10 hours per week for 10 weeks, typically in teams. Each team meets their nonprofit client at least weekly to ensure the project matches expectations. Typical projects include application development, data visualization, and web design. Most projects conclude at the end of the batch, although the founders note that some in-depth projects like product development can cross over into future batches. As the program has expanded, Zhu and Aggarwal have added a more formal mentorship component to the program to help guide students through their work.

Applications for the next batch starting in January are currently open for students (they’re due January 2nd, so get them in quick!). The founders told me that they are expecting 800 applications, and are likely going to be able to match about 200 volunteers to 32 projects. Applications are mostly about matching interests with potential programs for the best fit, rather than a purely competitive exercise. So far, the program has worked on 50 projects to date.

For this next batch, Amazon Web Services will sponsor a stipend for first-generation and low-income students to help defray the financial impact of volunteer work for some students. “Over the past cycle, a few people had to drop out because they said, ‘they’re unable to work for free because they’re having a lot of financial stress for their families’,” Aggarwal said. The new stipend is meant to help these students continue to volunteer while alleviating some of that financial burden.

Aggarwal said that two-thirds of the program’s volunteer developers and designers are female, and one-third are first-generation or low-income.



Tech got dragged into yet another unrelated Congressional scuffle this week after President Trump agreed to sign a bipartisan pandemic relief package but continued to press for the additional $2,000 checks that his party opposed during negotiations.

In tweets and other comments, Trump tied a push for the boosted relief payments to his entirely unrelated demand to repeal Section 230 of the Communications Decency Act, a critical but previously obscure law that protects internet companies from legal liability for user-generated content.

The political situation was complicated further after the Republicans in Georgia’s two extremely high-stakes runoff races sided with Trump on the additional checks, rather than with the majority of Republicans in Congress.

In a move that’s more a political maneuver than a real stab at tech regulation, Senate Majority Leader Mitch McConnell introduced a new bill late Tuesday linking the $2,000 payments Republicans previously blocked to an outright repeal of Section 230 — a proposal that’s sure to be doomed in Congress.

McConnell’s bill humors the president’s eclectic cluster of demands while creating an opportunity for his party to look unified, sort of, in the face of the Georgia situation. The proposal also tosses in a study on voter fraud, not because it’s relevant but because it’s another pet issue that Trump dragged into the whole mess.

Over the course of 2020, Trump has repeatedly returned to the idea of revoking Section 230 protections as a cudgel he can wield against tech companies, particularly Twitter, when the platform’s rules result in his own tweets being downranked or paired with misinformation warnings.

If the latest development sounds confusing, that’s because it is. Section 230 and the stimulus legislation have nothing at all to do with one another. And we were just talking about Section 230 in relation to another completely unrelated bit of legislation, a huge annual defense spending bill called the NDAA.

Last week Trump decided to veto that bill, which enjoyed broad bipartisan support because it funds the military and does other mostly uncontroversial stuff, on the grounds that it didn’t include his totally unrelated demand to strip tech companies of their Section 230 protections. Trump’s move was pretty much out of left field, but it opened the door for Democrats to leverage their cooperation in the two-thirds majority needed to override his veto in exchange for other things they want right now, namely those $2,000 stimulus checks for Americans. Sen. Bernie Sanders is attempting to do just that.

Unfortunately, McConnell’s move here is mostly a cynical one, to the detriment of Americans in financial turmoil. An outright repeal of Section 230 is a position with little, if any, support among Democrats. And while closely Trump-aligned Republicans have flirted with the idea of stripping online platforms of the legal shield altogether, some flavor of reform is what’s been on the table and what’s likely to get hashed out in 2021.

For lawmakers who understand the far-reaching implications of the law, reform rather than a straight-up repeal was always a more likely outcome. In the extraordinarily unlikely event that Section 230 gets repealed through this week’s strange series of events, many of the websites, apps and online services that people rely on would be thrown into chaos. Without Section 230’s liability protections, websites from Yelp to Fox News would be legally responsible for any user-generated reviews and comments they host. If an end to comments sections doesn’t sound so bad, imagine an internet without Amazon reviews, tweets and many other byproducts of the social internet.

The thing is, it’s not going to happen. McConnell doesn’t want Americans to receive the additional $2,000 checks and Democrats aren’t going to be willing to secure the funds by agreeing to a totally unrelated last-minute proposal to throw out the rules of the internet, particularly with regulatory pressure on tech mounting and more serious 230 reform efforts still underway. The proposed bill is also not even guaranteed to come up for a vote in the waning days of this Congressional session.

The end result will be that McConnell humors the president by offering him what he wanted, kind of; Democrats look bad for suddenly opposing much-needed additional stimulus money; and Americans in the midst of a deadly and financially devastating crisis probably don’t end up with more money in their pockets. Not great.



Marketers don’t grow up daydreaming about risk management and compliance. Personally, I never gave governance, risk or compliance (GRC) a second thought outside of making sure my team completed required compliance or phishing training from time to time.

So, when I was tasked with leading the General Data Protection Regulation (GDPR) compliance initiative at a previous employer, I was far from my comfort zone.

What I thought were going to be a few small requirements regarding how and when we sent emails to contacts based in Europe quickly turned into a complete overhaul of how the organization collected, processed and protected personally identifiable information (PII).

As it turned out, I had completely underestimated the scope and importance of the project. My first mistake? Assuming compliance was “someone else’s issue.”

Risk management is a team sport

No single risk leader can assess, manage and resolve an organization’s risks alone. Without active involvement from business unit leaders across the company in marketing, human resources, sales and more, a company can never have a healthy risk-aware culture.

Leaders successful at developing that culture instill a company-wide team mentality with well-defined objectives, a clear scope and an agreed-upon allocation of responsibility. Ultimately, you need buy-in similar to the way a football coach needs players to buy into the team’s culture and plays for peak performance. While the company’s risk managers may be the quarterbacks when it comes to GRC, the team won’t win without key plays by linemen (sales), running backs (marketing) and receivers (procurement).

It is a risk leader’s job to facilitate conversations around risk and help guide business unit leaders toward finding their own risk appetites. It is not their job to define acceptable levels of risk for the rest of us, which is why CMOs, HR and sales leaders have no choice but to take an active role in defining risk for their departments.

Shifting my view on risk management

If I am being honest, I used to think about risk management only in terms of asset protection and cost reduction. My crash course in risk responsibility opened my eyes to the many ways GRC can actually speed up deals and, furthermore, drive revenue.



Spyware maker NSO Group used real phone location data on thousands of unsuspecting people when it demonstrated its new COVID-19 contact-tracing system to governments and journalists, researchers have concluded.

NSO, a private intelligence company best known for developing and selling governments access to its Pegasus spyware, went on the charm offensive earlier this year to pitch its contact-tracing system, dubbed Fleming, aimed at helping governments track the spread of COVID-19. Fleming is designed to allow governments to feed location data from cell phone companies to visualize and track the spread of the virus. NSO gave several news outlets each a demo of Fleming, which NSO says helps governments make public health decisions “without compromising individual privacy.”

But in May, a security researcher told TechCrunch that he found an exposed database storing thousands of location data points used by NSO to demonstrate how Fleming works — the same demo seen by reporters weeks earlier.

TechCrunch reported the apparent security lapse to NSO, which quickly secured the database, but said that the location data was “not based on real and genuine data.”

NSO’s claim that the location data wasn’t real differed from reports in Israeli media, which said NSO had used phone location data obtained from advertising platforms, known as data brokers, to “train” the system. Academic and privacy expert Tehilla Shwartz Altshuler, who was also given a demo of Fleming, said NSO told her that the data was obtained from data brokers, which sell access to vast troves of aggregate location data collected from the apps installed on millions of phones.

TechCrunch asked researchers at Forensic Architecture, an academic unit at Goldsmiths, University of London that investigates human rights abuses, to examine the data. The researchers published their findings on Wednesday, concluding that the exposed data was likely based on real phone location data.

The researchers said if the data is real, then NSO “violated the privacy” of 32,000 individuals across Rwanda, Israel, Bahrain, Saudi Arabia and the United Arab Emirates — countries that are reportedly customers of NSO’s spyware.

The researchers analyzed a sample of the exposed phone location data by looking for patterns they expected to see with real people’s location data, such as a concentration of people in major cities and by measuring the time it took for individuals to travel from one place to another. The researchers also found spatial irregularities that would be associated with real data, such as star-like patterns that are caused by a phone trying to accurately pinpoint its location when the line of sight to the satellite is obstructed by tall buildings.

“The spatial ‘irregularities’ in our sample — a common signature of real mobile location tracks — further support our assessment that this is real data. Therefore, the dataset is most likely not ‘dummy’ nor computer generated data, but rather reflects the movement of actual individuals, possibly acquired from telecommunications carriers or a third-party source,” the researchers said.
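
Forensic Architecture’s code and dataset aren’t public, but the travel-time test described above is simple to sketch. The following hypothetical Python snippet illustrates the idea: sort each device’s pings by time and flag any consecutive pair that implies an implausible travel speed. The record format, the sample coordinates and the 250 km/h cutoff are assumptions for illustration, not details from the researchers’ report.

```python
import math
from collections import defaultdict

# Hypothetical record format: (device_id, unix_timestamp, latitude, longitude).
POINTS = [
    ("dev-1", 1588320000, 26.2285, 50.5860),  # Manama, Bahrain
    ("dev-1", 1588323600, 26.2361, 50.5831),  # one hour later, about 1 km away
    ("dev-1", 1588327200, 24.7136, 46.6753),  # Riyadh an hour after that: implausible
]

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two (lat, lon) points, in kilometres."""
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlam = math.radians(lon2 - lon1)
    a = math.sin(dphi / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dlam / 2) ** 2
    return 2 * 6371.0 * math.asin(math.sqrt(a))

def implausible_legs(points, max_kmh=250.0):
    """Yield (device, t1, t2, speed) for consecutive pings implying speeds over max_kmh."""
    by_device = defaultdict(list)
    for dev, ts, lat, lon in points:
        by_device[dev].append((ts, lat, lon))
    for dev, pings in by_device.items():
        pings.sort()  # order each device's pings by timestamp
        for (t1, la1, lo1), (t2, la2, lo2) in zip(pings, pings[1:]):
            hours = max(t2 - t1, 1) / 3600.0
            speed = haversine_km(la1, lo1, la2, lo2) / hours
            if speed > max_kmh:
                yield dev, t1, t2, round(speed, 1)

for leg in implausible_legs(POINTS):
    print(leg)  # real traces should produce few, if any, flagged legs
```

Real human traces should trip this check rarely, while carelessly generated synthetic data tends to trip it constantly, which is one reason tests like this help distinguish the two.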

The researchers built maps, graphs, and visualizations to explain their findings, while preserving the anonymity of the individuals whose location data was fed into NSO’s Fleming demo.

Gary Miller, a mobile network security expert and founder of cyber intelligence firm Exigent Media, reviewed some of the datasets and graphs, and concluded it was real phone location data.

Miller said the number of data points increased around population hubs. “If you take a scatter plot of cell phone locations at a given point in time, there will be consistency in the number of points in suburban versus urban locations,” he said. Miller also found evidence of people traveling together, which he said “looked consistent with real phone data.”

He also said that even “anonymized” location data sets can be used to tell a lot about a person, such as where they live and work, and who they visit. “One can learn a lot of details about individuals simply by looking at location movement patterns,” he said.

“If you add up all of the similarities it would be very difficult to conclude that this was not actual mobile network data,” he said.
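
Miller’s point that “anonymized” traces still expose where people live is easy to demonstrate. The sketch below, again a hypothetical illustration rather than anything from the investigation, guesses a device’s home by finding the coarse grid cell it occupies most often during overnight hours; the 22:00–06:00 window and the roughly 100-meter grid are arbitrary assumptions.

```python
from collections import Counter
from datetime import datetime, timezone

# Hypothetical pings for a single device: (unix_timestamp, latitude, longitude).
PINGS = [
    (1588305600, 26.2285, 50.5860),  # 04:00 UTC -- overnight, near "home"
    (1588348800, 26.1921, 50.5354),  # 16:00 UTC -- daytime, elsewhere
    (1588392000, 26.2283, 50.5862),  # 04:00 UTC next day -- "home" again
]

def guess_home(pings, cell=0.001):
    """Return the grid cell (~100 m) occupied most often between 22:00 and 06:00 UTC.

    "Home is where the phone sleeps" is a crude but well-known heuristic for
    deanonymising location traces.
    """
    nights = Counter()
    for ts, lat, lon in pings:
        hour = datetime.fromtimestamp(ts, tz=timezone.utc).hour
        if hour >= 22 or hour < 6:
            nights[(round(lat / cell) * cell, round(lon / cell) * cell)] += 1
    return nights.most_common(1)[0][0] if nights else None

print(guess_home(PINGS))  # prints the cell containing both overnight pings
```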

A timeline of one person’s location data in Bahrain over a three-week period. Researchers say these red lines represent travel that seems plausible within the indicated time. (Image: Forensic Architecture/supplied)

John Scott-Railton, a senior researcher at Citizen Lab, said the data likely originated from phone apps that use a blend of direct GPS data, nearby Wi-Fi networks, and the phone’s in-built sensors to try to improve the quality of the location data. “But it’s never really perfect,” he said. “If you’re looking at advertising data — like the kind that you buy from a data broker — it would look a lot like this.”

Scott-Railton also said that using simulated data for a contact-tracing system would be “counterproductive,” as NSO would “want to train [Fleming] on data that is as real and representative as possible.”

“Based on what I saw, the analysis provided by Forensic Architecture is consistent with the previous statements by Tehilla Shwartz Altshuler,” said Scott-Railton, referring to the academic who said NSO told her the demo was based on real data.

“The whole situation paints a picture of a spyware company once more being cavalier with sensitive and potentially personal information,” he said.

NSO rejected the researchers’ findings.

“We have not seen the supposed examination and have to question how these conclusions were reached. Nevertheless, we stand by our previous response of May 6, 2020. The demo material was not based on real and genuine data related to infected COVID-19 individuals,” said an unnamed spokesperson. (NSO’s earlier statement made no reference to individuals with COVID-19.)

“As our last statement details, the data used for the demonstrations did not contain any personally identifiable information (PII). And, also as previously stated, this demo was a simulation based on obfuscated data. The Fleming system is a tool that analyzes data provided by end users to help healthcare decision-makers during this global pandemic. NSO does not collect any data for the system, nor does NSO have any access to collected data.”

NSO did not answer our specific questions, including where the data came from and how it was obtained. The company claims on its website that Fleming is “already being operated by countries around the world,” but declined to confirm or deny its government customers when asked.

The Israeli spyware maker’s push into contact tracing has been seen as a way to repair its image, as the company battles a lawsuit in the United States that could see it reveal more about the governments that buy access to its Pegasus spyware.

NSO is currently embroiled in a lawsuit with Facebook-owned WhatsApp, which last year blamed NSO for exploiting an undisclosed vulnerability in WhatsApp to infect some 1,400 phones with Pegasus, including those of journalists and human rights defenders. NSO says it should be afforded legal immunity because it acts on behalf of governments. But Microsoft, Google, Cisco and VMware filed an amicus brief this week in support of WhatsApp, calling on the court to reject NSO’s claim to immunity.

The amicus brief came shortly after Citizen Lab found evidence that dozens of journalists were also targeted with Pegasus spyware by NSO customers, including Saudi Arabia and the United Arab Emirates. NSO disputed the findings.


