Pacific Leaders Sign on to Australian Internet Cabling Scheme, Shutting Out China

Pacific nations Papua New Guinea and the Solomon Islands have signed on to a joint undersea internet cable project, funded mostly by Australia, that forestalls plans by Chinese telecom giant Huawei Technologies Co Ltd to lay the links itself.

Wednesday’s pact comes as China pushes for influence in a region Australia views as its backyard, amid souring ties after Prime Minister Malcolm Turnbull last year accused Beijing of meddling in Canberra’s affairs.

Australia will pay two-thirds of the project cost of A$136.6 million ($100 million) under the deal, signed on a visit to Brisbane by Solomon Islands Prime Minister Rick Houenipwela and Papua New Guinea Prime Minister Peter O’Neill.

“We spend billions of dollars a year on foreign aid and this is a very practical way of investing in the future economic growth of our neighbors in the Pacific,” Turnbull told reporters about the deal.

The project, for which Australian telecom firm Vocus Group Ltd is building the cable, will link the two nations to the Australian mainland and also connect the Solomons capital, Honiara, with the archipelago’s outer islands.

For years, Western intelligence agencies have worried over Huawei’s ties to the Chinese government and the possibility that its equipment could be used for espionage.

Australia, which is poised to ban Huawei from its domestic 5G mobile network on the advice of its intelligence services, raised “concerns” that scuppered a Huawei offer for cabling to the Solomons, Houenipwela has previously told the Australian Broadcasting Corp.

Huawei has said it was never informed of any security problems with its planned cables for the Solomons, where Chinese activity has attracted additional attention, as it is one of six countries in the Pacific to maintain ties with Taiwan. 

China claims self-ruled Taiwan as its own and has never renounced the use of force to bring under its control what it sees as a wayward province.

In Purge, Twitter Removing ‘Suspicious’ Followers

Social networking platform Twitter announced Wednesday that it will remove accounts it has deemed suspicious from users’ follower counts, as part of a recent push to promote accuracy on the website. The change could reduce the number of “followers” of some of the website’s most popular users, including politicians and celebrities.

The website had locked the accounts of users where Twitter “detected sudden changes in account behavior,” such as sharing misleading links, being blocked by a large number of accounts they had interacted with, or sending a large number of unsolicited replies to other users’ tweets, Twitter general counsel Vijaya Gadde wrote. A locked account cannot be logged into or used until the account’s owner verifies it.

Wednesday’s change will remove these locked accounts from users’ follower counts, which are visible on a user’s account page and are often used as a barometer of an individual’s sway on the platform. According to USA Today, 336 million users log into the website every month.

Gadde wrote that while the average Twitter user will see their follower count drop by only about four, popular accounts could see a more dramatic drop in the number of their followers.

In the wake of reports that Russia had used fake accounts on the platform to help sow discord among the American public in the lead-up to and aftermath of the 2016 U.S. presidential election, Twitter CEO Jack Dorsey pledged in March 2018 to help clean up the website.

And on Friday, The Washington Post reported that Twitter had suspended more than a million accounts a day in recent months — upward of 70 million in the months of May and June 2018 alone.

“I wish Twitter had been more proactive sooner,” Sen. Mark Warner [D-Virginia], the top Democrat on the Senate Intelligence Committee, told the Post. “I’m glad that — after months of focus on this issue — Twitter appears to be cracking down on the use of bots and other fake accounts, though there is still much work to do.”

Following the Post’s report, U.S. President Donald Trump, who often was the recipient of support from Russian-linked accounts, posted a tweet in response.

One such Twitter account suspended in 2017, @TEN_GOP, purporting to be related to the Tennessee Republican Party, had its tweets shared on the platform by Trump White House officials such as Kellyanne Conway and former National Security Adviser Michael Flynn.

In February 2018, special counsel Robert Mueller, who is investigating Russian influence in the Trump campaign and the 2016 election, named the account in an indictment, alleging it was one of many on social media that “primarily intended to communicate derogatory information about Hillary Clinton, to denigrate other candidates such as Ted Cruz and Marco Rubio, and to support Bernie Sanders and then-candidate Donald Trump.”

Facebook Faces First Fine in Data Scandal Involving Cambridge Analytica

Facebook is facing its first fine in the wake of the Cambridge Analytica scandal, in which the social media platform allowed the data mining firm to access the private information of millions of users without their consent or knowledge.

A British government investigative office, the Information Commissioner’s Office (ICO), fined Facebook 500,000 pounds, or $663,000 – the maximum amount that can be levied for the violation of British data privacy laws. In a report, the ICO found Facebook had broken the law in failing to protect the data of the estimated 87 million users affected by the security breach.

The ICO’s investigation concluded that Facebook “contravened the law by failing to safeguard people’s information,” the report read. It also found that the company failed to be transparent about how people’s data was harvested by others on its platform.

Cambridge Analytica, a London firm that shuttered its doors in May following a report by The New York Times and The Observer chronicling its dealings, offered “tools that could identify the personalities of American voters and influence their behavior,” according to a March Times report.

“New technologies that use data analytics to micro-target people give campaign groups the ability to connect with individual voters,” Information Commissioner Elizabeth Denham said in a statement. “But this cannot be at the expense of transparency, fairness and compliance with the law.”

The firm, which U.S. President Donald Trump employed during his successful 2016 election campaign, was heavily funded by American businessman Robert Mercer, who is also a major donor to the U.S. Republican Party. Former Trump White House adviser Steve Bannon was also employed by the firm and has said he coined the company’s name.

Christopher Wylie, a whistleblower within the firm, told the Times in March that the firm aimed to create psychological profiles of American voters and use those profiles to target them via advertising.

“[Cambridge Analytica’s leaders] want to fight a culture war in America,” Wylie told the Times. “Cambridge Analytica was supposed to be the arsenal of weapons to fight that culture war.”

While this is the first financial penalty Facebook will be facing in the scandal, the fine will not make a dent in the company’s profits. The social media giant generated $11.97 billion in revenue in the first quarter, and generates the revenue needed to pay the fine about every 10 minutes.

Denham said the company will have an opportunity to respond to the fine before a final decision is made. Facebook has said it will respond to the ICO report soon.

“As we have said before, we should have done more to investigate claims about Cambridge Analytica and taken action in 2015,” said Erin Egan, Facebook’s chief privacy officer, in a statement. “We have been working closely with the Information Commissioner’s Office in their investigation of Cambridge Analytica, just as we have with authorities in the U.S. and other countries.”

The statement from the ICO also announced that the office would seek to criminally prosecute SCL Elections Ltd., Cambridge Analytica’s parent company, for failing to comply with a legal request from a U.S. professor to disclose what data the company had on him. SCL Elections also shut down in May.

“Your data is yours and you have a right to control its use,” wrote David Carroll, the professor.

The ICO said it would also ask 11 political parties to conduct audits of their data protection processes, and would compel SCL Elections to comply with Carroll’s request.

Further investigations by agencies such as the U.S. Federal Bureau of Investigation (FBI) and the Securities and Exchange Commission (SEC) are underway. In April, Facebook founder and CEO Mark Zuckerberg appeared before a U.S. Senate committee to testify on the company’s actions in the scandal.

“We didn’t take a broad enough view of our responsibility, and that was a big mistake,” Zuckerberg told U.S. lawmakers in prepared remarks in April. He also said, “It was my mistake, and I’m sorry.”

Former Apple Engineer Charged With Stealing Self-driving Car Technology

A former Apple engineer has been charged in federal court with stealing trade secrets related to a self-driving car and attempting to flee to China.

Agents in San Jose, California, arrested Xiaolang Zhang on Saturday, moments before he was to board his flight.

Zhang is said to have taken paternity leave in April, traveling to China just after the birth of a child.

When he returned, he informed his supervisors that he was leaving Apple to join Xiaopeng Motors, a Chinese company in Guangzhou that also plans to build self-driving cars.

But security cameras allegedly caught Zhang entering Apple’s self-driving car lab and downloading blueprints and other information onto a personal computer at a time when he was supposed to be in China on paternity leave.

Neither the FBI nor Zhang’s lawyers have commented.

As Technology Advances, Women Are Left Behind in Digital Divide

Poverty, gender discrimination and digital illiteracy are leaving women behind as the global workforce increasingly uses digital tools and other technologies, experts warned Tuesday.

The so-called “digital divide” has traditionally referred to the gap between those who have access to computers and the internet, and those with limited or no access.

But technology experts say women and girls with poor digital literacy skills will be the hardest hit and will struggle to find jobs as technology advances.

“Digital skills are indispensable for girls and young women to obtain safe employment in the formal labor market,” said Lindsey Nefesh-Clarke, founder of Women’s Worldwide Web, a charity that trains girls in digital literacy.

She said “offline factors” like poverty, gender discrimination and gender stereotypes were preventing girls and women from benefiting from digital technologies.

Globally, the proportion of men using the internet in 2017 was 12 percent higher than the proportion of women, according to the International Telecommunication Union, a United Nations agency.

There are also 200 million fewer women than men who own a mobile phone, the Organization for Economic Co-operation and Development said in a March report.

“Women are currently on the wrong side of the digital skills gap. In tech, it’s a man’s world. We have a global problem, we have an urgent problem on our hands,” said Nefesh-Clarke at a gender equality forum run by Chatham House in London on Tuesday.

According to a 2017 study by the Brookings Institution, a U.S. think tank, the use of digital tools has increased in 517 of 545 occupations since 2002 in the United States alone, with a striking uptick in many lower-skilled occupations.

“The entire economy is shifting, and we need new skills to be able to cope with that new economy,” said Dorothy Gordon, a technology expert and associate fellow with Chatham House.

“So when we look at the jobs that women are in today, what are the skillsets that they will need to acquire to be able to be competitive in that job market as we move forward?” she said.

Even with new jobs emerging through online or mobile platforms, such as rideshare apps Uber and Lyft, domestic services and food couriers, women are still faring worse than men, research shows.

A U.S. study by the National Bureau of Economic Research in June found the gender pay gap among Uber drivers was 7 percent.

“Many of the challenges that come through digital work are, frankly, old wine in new bottles,” said Abigail Hunt, a gender researcher at the British-based Overseas Development Institute, referring to the Uber study.

She said safety concerns, gender bias and discrimination contributed to how much women could earn in the so-called “gig economy.”

“Discrimination based on gender, ethnicity, geographical location, age — it’s the same issues we’ve always seen that are discriminating against women,” Hunt said.

WhatsApp Launches Campaign in India to Spot Fake Messages

After hoax messages on WhatsApp fueled deadly mob violence in India, the Facebook-owned messaging platform published full-page advertisements in prominent English and Hindi language newspapers advising users on how to spot misinformation.

The advertisements are the first measure taken by the social media company to raise awareness about fake messages, following a warning from the Indian government that the company must take immediate action to curb the spread of false information.

While India is not the only country battling the phenomenon of fake messaging on social media, it has taken a menacing turn in India: in the past two months, more than a dozen people have died in lynchings sparked by false posts spread on WhatsApp claiming the victims were child kidnappers.

Ironically, the digital media giant turned to traditional print media to disseminate its message. The advertisements, which begin with the line “Together we can fight false information,” give 10 tips on how to sift truth from rumor and will also be placed in regional-language newspapers.

They call on users to check photos in messages carefully, because photos and videos can be edited to mislead; to question stories that seem hard to believe; to “think twice before sharing a post that makes you angry and upset”; and to check other news websites or apps to see if a story is being reported elsewhere. The ads also warn that fake news often goes viral and urge people not to believe a message just because it has been shared many times.

Internet experts called the media blitz a good first step, but stressed the need for a much larger initiative to curb the spread of fake messages that authorities are struggling to tackle.

“There has to be a repetitive pattern. People have to be told again and again and again,” says Pratik Sinha, who runs a fact-checking website called Alt News and hopes the social media giant will run a sustained campaign. “That kind of fear mongering that has gone on on WhatsApp, that is not going to go away by just putting out an advertisement one day a year. This needs a continuous form of education.”

Some pointed out that although newspapers are popular in India, many users of the messaging platform, especially in rural areas, are unlikely to be newspaper readers.

The fake posts that have spread on WhatsApp have ranged from sensationalist warnings of natural calamities and fake stories with political messaging to bogus medical advice. The false messages that warned parents about child abductors were sometimes accompanied by gruesome videos of child abuse.

Experts said that the need to curb fake news has also assumed urgency ahead of India’s general elections scheduled for next year, as WhatsApp has become the favored medium for political parties to target voters. With about 200 million users, India is the messaging service’s largest market.

New Startup Brings Robotics into Seniors’ Homes

Senior citizens – adults 65 and older – will outnumber children in the United States for the first time by 2035, according to the U.S. Census Bureau. As their numbers increase, the demand for elder care is also growing.

For the past 12 years, SenCura has been providing non-medical in-home care for this segment of the population in Northern Virginia. Company founder Cliff Glier says its services “include things such as bathing, dressing, companionship, meal planning and prep and transportation, pretty much everything in and around the home that seniors typically need help with.”

Hollie, one of SenCura’s professional caregivers, visits 88-year-old Olga Robertson every day for three hours. She cooks for her, takes her to appointments, plays some brain games with her and goes walking with her around the neighborhood or in the mall.

But when Hollie is not around, Robertson still has company: a robot named Rudy. “You can have a conversation with him,” Robertson says. “He’s somebody you talk to and he responds.”

He also provides entertainment, telling her jokes, playing games and dancing with her.

In addition to keeping her mentally and physically engaged, Rudy provides access to emergency services around the clock, keeps track of misplaced items and reminds her about appointments and when it’s time for her medicine. The robot stands a bit over a meter high and has a digital screen embedded in its torso for virtual check-ins with family and caregivers.

Robertson has actually introduced Rudy to her neighbors. “I kind of became famous in the neighborhood because of this robot.”

The caregiver who helps caregivers

Anthony Nunez is the founder of INF Robotics, the startup that created Rudy. He says the idea behind the robotic caregiver was inspired by what his mother went through when his grandmother got older and needed help.

“As I grew older, I realized we weren’t the only family facing this problem,” Nunez recalls. “There are thousands of families facing the same issue. Most cases are even worse, where they have a loved one to take care of and the cost becomes an issue. So what we wanted to do was design a robot that’s easy to use, designed especially for seniors, but also affordable.”

Nunez says technology can help seniors age in place while being well taken care of.

“We’re leveraging the artificial intelligence within our platform to help seniors make better decisions, to allow them to stay in their home,” he explains. “We’re also working on machine learning on a platform and some cognitive computing to identify patterns within the seniors’ daily habits that could lead to an adverse event, and identifying those ahead of time, then using our cloud computing on a platform to get that info to caregivers before something happens.”

Carla Rodriguez has been working with Nunez’s company since it was founded. She says Rudy’s simple design makes it easy to use. The company also consults its potential customers to decide which features they need most in a robotic caregiver.

“We always have seniors involved and every time we had some type of communication we would introduce it,” she says. “Seniors would give us their feedback, ‘We don’t like this, we don’t like that,’ we come in and change it.”

Cooperation vs. competition

SenCura’s Cliff Glier met Nunez and his team at an event more than a year ago. He became interested in introducing Rudy to his customers.

“We are dealing with older adults that are typically 80, 90, 100 years old,” he says. “So this kind of technology is very new to them, so there will be some closer looks at it. People, I would say, would be interested once they learn more, we have the opportunity to show them Rudy and its capabilities.”

Rudy is not competition for human caregivers, Glier says. “He’s around to help out, where the caregivers typically would come in, may help with bathing or dressing, things at this point Rudy can’t do, but beyond that, Rudy simply fills the growing gap.”

The robot supplements what in-home caregivers do for the growing population of seniors who prefer to age in place – with a little help from some friends.

YouTube Aims to Crack Down on Fake News, Support Journalism

Google’s YouTube says it is taking several steps to ensure the veracity of news on its service by cracking down on misinformation and supporting news organizations.

The company said Monday it will make “authoritative” news sources more prominent, especially in the wake of breaking news events when misinformation can spread quickly.

At such times, YouTube will begin showing users short text previews of news stories in video search results, as well as warnings that the stories can change. The goal is to counter the fake videos that can proliferate immediately after shootings, natural disasters and other major happenings. For example, YouTube search results prominently showed videos purporting to “prove” that mass shootings like the one that killed at least 59 in Las Vegas were fake, acted out by “crisis actors.”

In these urgent cases, traditional video won’t do, since it takes time for news outlets to produce and verify high-quality clips. So YouTube aims to short-circuit the misinformation loop with text stories that can quickly provide more accurate information. Company executives announced the effort at YouTube’s New York offices.

Those officials, however, offered only vague descriptions of which sources YouTube will consider authoritative. Chief Product Officer Neal Mohan said the company isn’t just compiling a simple list of trusted news outlets, noting that the definition of authoritative is “fluid,” and added the caveat that it won’t simply boil down to sources that are popular on YouTube.

He added that 10,000 human reviewers at Google — so-called search quality raters who monitor search results around the world — are helping determine what will count as authoritative sources and news stories.

Alexios Mantzarlis, a Poynter Institute faculty member who helped Facebook team up with fact-checkers (including The Associated Press), said the text story snippet at the top of search results was “cautiously a good step forward.”

But he worried about what would happen to fake news videos that are simply promoted by YouTube’s recommendation engine and appear in feeds without being searched for.

He said it would be preferable if Google used people instead of algorithms to vet fake news.

“Facebook was reluctant to go down that path two and a half years ago and then they did,” he said.

YouTube also said it will commit $25 million over the next several years to improving news on YouTube and tackling “emerging challenges” such as misinformation. That sum includes funding to help news organizations around the world build “sustainable video operations,” such as by training staff and improving production facilities. The money would not fund video creation.

The company is also testing ways to counter conspiracy videos with generally trusted sources such as Wikipedia and Encyclopedia Britannica. For common conspiracy subjects — what YouTube delicately calls “well-established historical and scientific topics that have often been subject to misinformation,” such as the moon landing and the 1995 Oklahoma City bombing — Google will add information from such third parties for users who search on these topics.