Well I did shoplift some grass seed from Home Depot. Realized as I was walking out that I forgot to ring it up... but I had a mask on, so bandido time it was.
It is interesting to think about right-wing propagandists using the fact that stores being so slow in 2020 caused a drastic decrease in crimes like shoplifting, and how stores being open again in 2021 will show a huge increase as crime gets back to a more normal level. So even with crime still down in 2021, they are going to be able to pretend it is so much worse than it really is.
China is turning a major part of its internal Internet data surveillance network outward, mining Western social media, including Facebook and Twitter, to equip its government agencies, military and police with information on foreign targets, according to a Washington Post review of hundreds of Chinese bidding documents, contracts and company filings.
China maintains a countrywide network of government data surveillance services — called public opinion analysis software — that were developed over the past decade and are used domestically to warn officials of politically sensitive information online.
The software primarily targets China’s domestic Internet users and media, but a Washington Post review of bidding documents and contracts for over 300 Chinese government projects since the beginning of 2020 include orders for software designed to collect data on foreign targets from sources such as Twitter, Facebook and other Western social media.
The documents, publicly accessible through domestic government bidding platforms, also show that agencies including state media, propaganda departments, police, military and cyber regulators are purchasing new or more sophisticated systems to gather data.
These include a $320,000 Chinese state media software program that mines Twitter and Facebook to create a database of foreign journalists and academics; a $216,000 Beijing police intelligence program that analyzes Western chatter on Hong Kong and Taiwan; and a Xinjiang cybercenter cataloguing Uyghur language content abroad.
“Now we can better understand the underground network of anti-China personnel,” said a Beijing-based analyst who works for a unit reporting to China’s Central Propaganda Department. The person, who spoke on the condition of anonymity to discuss their work, said they were once tasked with producing a data report on how negative content relating to Beijing’s senior leadership is spread on Twitter, including profiles of individual academics, politicians and journalists.
These surveillance dragnets are part of a wider drive by Beijing to refine its foreign propaganda efforts through big data and artificial intelligence.
They also form a network of warning systems designed to sound real-time alarms for trends that undermine Beijing’s interests.
“They are now reorienting part of that effort outward, and I think that’s frankly terrifying, looking at the sheer numbers and sheer scale that this has taken inside China,” said Mareike Ohlberg, a senior fellow at the German Marshall Fund who has conducted extensive research on China’s domestic public opinion network.
“It really shows that they now feel it’s their responsibility to defend China overseas and fight the public opinion war overseas,” she said.
'Public opinion guidance'
China’s systems for analyzing domestic public opinion online are a powerful but largely unseen pillar of President Xi Jinping’s program to modernize China’s propaganda apparatus and maintain control over the Internet.
The vast data collection and monitoring efforts give officials insight into public opinion, a challenge in a country that does not hold public elections or permit independent media.
The services also provide increasingly technical surveillance for China’s censorship apparatus. And most systems include alarm functions designed to alert officials and police to negative content in real time.
These operations are an important function of what Beijing calls “public opinion guidance work” — a policy of molding public sentiment in favor of the government through targeted propaganda and censorship.
The phrase first came to prominence in policymaking after the 1989 Tiananmen Square pro-democracy demonstrations, when officials began exploring new ways to preempt popular challenges to the Communist Party’s power, and has since become integral to the underlying architecture of China’s Internet, where users are linked by real name ID, and Internet services are required by law to maintain an internal censorship apparatus.
The exact scope of China’s government public opinion monitoring industry is unclear, but there have been some indications about its size in Chinese state media. In 2014, the state-backed newspaper China Daily said more than 2 million people were working as public opinion analysts.
In 2018, the People’s Daily, another official organ, said the government’s online opinion analysis industry was worth “tens of billions of yuan,” equivalent to billions of dollars, and was growing at a rate of 50 percent a year.
In 2020, the State Department reclassified the U.S.-based operations of China’s top state media outlets as foreign missions, increasing reporting requirements and restricting their visa allocations, angering Beijing.
The People’s Daily Online, a unit of the state newspaper the People’s Daily, which provides one of the country’s largest contract public opinion analysis services, won dozens of projects that include overseas social media data collection services for police, judicial authorities, Communist Party organizations and other clients.
The unit, which recorded $330 million in operating income in 2020, up 50 percent from 2018, says it serves over 200 government agencies, although it is not clear how many request foreign social media data.
In one tender won by the People’s Daily Online, the Beijing Police Intelligence Command Unit purchased a $30,570 service to trawl foreign social media and produce reports on unspecified “key personnel and organizations,” gathering information on their “basic circumstances, background and relationships.”
The tender also calls for weekly data reports on Hong Kong, Taiwan and U.S. relations. Issued shortly before the 2020 U.S. presidential election results were certified on Jan. 6, 2021, it called for “special reports” on “netizens’ main views” related to the election.
In June 2020, Twitter suspended 23,000 accounts that it said were linked to the Chinese Communist Party and covertly spreading propaganda to undermine pro-democracy protests in Hong Kong. This month, Twitter said it removed a further 2,048 accounts linked to Beijing and producing coordinated content undermining accusations of rights abuses in Xinjiang.
Experts say those accounts represent a small fraction of China’s efforts to boost pro-Beijing messaging on foreign social media.
'Extreme chilling effect'
Just under a third of the public opinion analysis systems reviewed by The Post were procured by Chinese police.
In 14 instances, the analysis systems included a feature requested by the police that would automatically flag “sensitive” content related to Uyghurs and other Chinese ethnic minorities. An additional 12 analysis systems included the police-requested capability of monitoring individual content authors over time.
“It must support information monitoring of overseas social media … and provide for targeted collection of designated sites and authors,” said one invitation to tender released by the Fuzhou city police in October that lists coverage of Facebook and Twitter as a requirement.
The monitoring of social media abroad by local police throughout China could be used in investigating Chinese citizens locally and abroad, as well as in flagging trends that stir domestic dissent, experts say.
“The public security monitoring is very much about stability maintenance, tracking people down and finding people’s identity, and when they monitor overseas social media, it’s also often with an eye to monitoring what news could cause trouble at home in China,” said the German Marshall Fund’s Ohlberg.
Companies providing overseas public opinion monitoring to police include a mix of private and state-owned firms, including the People’s Daily Online.
Six police contracts awarded since 2020 stated that the People’s Daily was chosen to conduct monitoring on the basis of its technical ability to gather data abroad.
PHILADELPHIA — They arrived in yellow rental trucks, unfurled their flags, and readied shields and smoke bombs. The hour was late, and the symbolism was unsettling: As the clock inched close to midnight on July 3, about 200 members of the white nationalist group Patriot Front marched through downtown Philadelphia, past Independence Hall and other historic landmarks, while chanting, “Take America back!”
If the demonstration was meant to be a show of strength for the organization, it ended meekly. After scuffling with a handful of counterprotesters, the Patriot Front members retreated into their Penske trucks and then were stopped by Philadelphia police on Delaware Avenue, where some marchers sat dejectedly, their heads bowed.
But the episode served a dual purpose. Social media has proven to be fertile ground for white supremacist and conspiracy theory movements trying to attract new members. Patriot Front turned footage of its parade through the city into a hype video; on its website, its members likened themselves to Revolutionary War heroes, and insisted, “Americans must dictate America.”
A month before the Philadelphia demonstration, more than 300 researchers and scholars had volunteered to be part of a new effort to curb the spread of extremism: the Collaboratory Against Hate, a center created by the University of Pittsburgh and Carnegie Mellon University.
The project is well-timed. More than 8,000 hate crimes were reported in the United States in 2020, the highest total in more than a decade, according to the FBI. In Philadelphia, 63 people were reported as victims of hate crimes, a 320% increase from 2019, when there were 15 victims. Statewide, the number of victims doubled in 2020 to 110, but that’s likely a low estimate; of Pennsylvania’s 1,504 law enforcement agencies, just 734 supplied data to the FBI.
“We’re pushing back at something I hear a lot, which is ‘Well, people are always going to hate some people. Hate is human nature, and we’re stuck with it, and there’s always going to be hate groups,’” said Kathleen Blee, the Collaboratory’s co-director.
“But it’s not like everybody is racist, and we’re doomed to this. … There is a deeper, more entrenched, more destructive side of extremism and political conspiracy that is not part of human nature. It is deliberately constructed, like antisemitism is deliberately constructed in these groups. We just need to apply our tools to get past that. And make that impossible.”
‘Into the rabbit hole’
Collaboratory researchers are first digging into three areas — digital-content moderation, extremism in the military, and youth and extremism. The last, Blee said, is a newer trend she finds particularly worrisome.
White supremacists are using online video-game communities and streaming platforms to approach and recruit teenagers and middle schoolers. Some entreaties start with jokes, or by introducing white supremacist phrases to pique a child’s interest; others share links to material that lures young gamers deeper.
“That’s one of the most disturbing, and one of the least known, phenomena,” said Blee, who has studied white supremacy since the 1980s. “We’re just starting to get a handle on it. For some children, that’s probably an approach that pulls them into the rabbit hole.”
White supremacist narratives rely on a combination of elements to hook recruits: the offer of a collective identity, revelations about sinister conspiracies, and the promise of righting perceived grievances.
“Getting into that mind-set, and into that world, virtually or in real life, can be such a fundamentally self-altering experience that it really can be quite difficult to firmly pull yourself out,” Blee said.
Her concerns about hate groups recruiting minors were borne out by a 2021 survey conducted by the Anti-Defamation League, which found that 10% of gamers aged 13 to 17 had encountered white supremacist ideology while participating in online multiplayer games, and 60% had experienced some form of harassment.
“They hear people talk about the superiority of the white race, and the desire of a white homeland,” said Daniel Kelley, the associate director of the ADL’s Center for Technology and Society.
Kelley theorizes that large numbers of white supremacists aren’t flocking to online games to recruit new members; instead, savvy ones have just learned to exploit a realm that is inherently vulnerable.
In the wake of the Jan. 6 attack on the U.S. Capitol, Facebook and Twitter banned the accounts of former President Donald Trump and thousands of accounts affiliated with far-right extremists, white nationalists, and the conspiracy movement QAnon. And Facebook faced congressional scrutiny when a whistle-blower accused the company of allowing misinformation to spread.
But Kelley said there’s little oversight of online gaming ecosystems, or “gaming adjacent” platforms like Twitch, Steam, and Discord, with some companies flatly opposed to content moderation.
Instead, they’ve become environments where hate speech “can be normalized in a dangerous way.”
The majority of parents who responded to the ADL’s survey said they don’t set security controls on games their kids use, and most teens don’t tell their parents about uncomfortable exchanges they’ve had, which can include stalking, intimidation, and threats of harm.
Kelley argues it’s a mistake to draw a distinction between online and offline extremism.
“There’s a tendency to call offline life ‘real life,’” he said. “But every time there’s an interaction in a digital space, there’s a real person behind that screen.”
And even one person who espouses hatred and violent fantasies online can cause unimaginable harm.
A growing threat
During her decades of research into white supremacist movements, Blee detected a pattern. “It can appear that it’s strengthening and weakening, but often what it’s really doing is crossing the line of public visibility,” she said. “It sinks below the line, and rises above the line.”
In the 1980s, many hate groups found themselves marginalized. They grew bolder in the 1990s — national preparedness expos became a big-tent gathering spot for militia groups and conspiracy theorists, and in 1995, an antigovernment terrorist who described white nationalists as his “brother in arms” blew up a federal building in Oklahoma City, killing 168 people, including 19 children.
Extremist movements multiplied at a dizzying pace in recent years, and the COVID-19 pandemic and the Jan. 6 insurrection have only energized those who want to see the U.S. government overthrown. More than 700 participants in the Capitol attack have faced federal prosecution.
And in November, a federal jury ordered more than a dozen white nationalist leaders to pay $26 million in damages over the violence that erupted at a 2017 “Unite the Right” rally in Charlottesville, Va., which left one counterprotester dead. (Patriot Front, which marched in Philadelphia and distributed propaganda on local college campuses, was formed by some members of a neo-Nazi group that took part in the Virginia rally.)
“On the extremist side, we still see a momentum,” Blee said, “not a demobilization.”
In November, the Department of Homeland Security warned that “racially or ethnically motivated violent extremists” pose a continuing threat to the nation, and might use pandemic-related health restrictions as a reason to target government officials.
Months earlier, ABC News reported that FBI agents in San Antonio concluded in a confidential intelligence assessment that white supremacists were seeking to infiltrate law enforcement and the military to “prepare for and initiate a collapse of society” and harm racial and ethnic minorities.
The Collaboratory Against Hate hopes to shape the data and other insights its researchers assemble into tools that can blunt the growth of extremism.
That mission grows more urgent by the day.
Two days after Christmas, a 47-year-old man named Lyndon McLeod zigzagged across Denver, shot a police officer, and murdered five people at a pair of tattoo shops and a hotel. McLeod was killed by the same officer he had wounded.
McLeod left behind a digital footprint that led, unsurprisingly, to dark obsessions and familiar, twisted ideologies. He wrote books under a pen name that, according to Newsweek, included fantasies about committing a mass shooting, and regurgitated white supremacists’ hatred for Black and Muslim people.
On Twitter, he pined for “retributive violence” and complained that aggressive white males have become irrelevant.
“War,” he wrote, “is coming.”
Arizona Republican State Representative Jake Hoffman made news this week when it was revealed he signed a forged "certification" falsely stating that Donald Trump, not Joe Biden, had won his state's Electoral College votes.
A video of Hoffman that has gone viral shows him defending signing the forged documents, in which he falsely identifies himself as a duly elected elector for Trump.
His defense: "in unprecedented times, unprecedented action does occur." He goes on to claim, "there is no case law, there is no precedent that exists as to whether or not an election that is currently being litigated in the courts has due standing."
He called the forged electoral documents "dueling opinions" in a video.
The video has been viewed over 800,000 times in just 14 hours.
Hoffman, it turns out, was banned from Twitter after his company, Rally Forge, worked with Charlie Kirk's far-right political activist group Turning Point USA during the 2020 election, establishing "a domestic 'troll farm' in Phoenix, Arizona, that employed teenagers to churn out pro-Trump social media posts, some of which cast doubt on the integrity of the US election system or falsely charged Democrats with attempting to steal the election, the Washington Post revealed," according to The Guardian.
The Washington Post also reported that "the posts are the product of a sprawling yet secretive campaign that experts say evades the guardrails put in place by social media companies to limit online disinformation of the sort used by Russia during the 2016 campaign."
Some of those teens, The Post noted, were minors.
That sounds like a Federal felony, but I don’t know which one.
Donald Trump's efforts to push his "big lie" of election fraud are receiving powerful help from one billionaire donor, The Daily Beast reported Thursday.
"Among the ranks of “dark money” groups and anonymous megadonorswho bankrolled the effort is a familiar name in GOP fundraising circles: Dick Uihlein, founder of the multinational Uline shipping company," The Beast reported. "According to previously unreported tax disclosures, Uihlein’s nonprofit—the Ed Uihlein Family Foundation—poured millions of dollars in 2020 into a sprawling number of groups connected to efforts to challenge Joe Biden’s victory and reimagine election law, as well as other right-wing extremist organizations, including ones designated as hate groups."
The Beast noted that all of the foundation's $16.8 million in donations in 2020 came from Uihlein.
Kyle Herrig, president of Accountable.US, blasted Uihlein for the donations.
“In 2020, as workers and families struggled to get by, Dick and Liz Uihlein’s company cashed in on pandemic aid—then turned around and funded hate groups pushing COVID conspiracy theories, bigotry, and efforts to undermine democracy,” he said. “By signing away more than $1 million to groups that have promoted hate and sedition, Dick and Liz Uihlein have made it clear where their company’s values truly lie.”
The Beast noted Uihlein gave $1.25 million to the Conservative Partnership Institute, where Cleta Mitchell served as a senior legal fellow.
"Mitchell, a veteran GOP operative, helped construct the campaign’s post-election legal strategy mostly behind the scenes. But she drew national attention in early January 2021 after she featured heavily in a taped phone call between then-President Donald Trump, his Chief of Staff Mark Meadows, and Georgia’s top election officials. Trump pressured the election officials in that now infamous call to 'find' enough votes for him to win Georgia. (Meadows joined Mitchell at CPI after he left the White House in January.," The Beast reported.
Uihlein also contributed to the Federalist Society, the Texas Public Policy Forum, and the Center for Security Policy, which has been designated a hate group by the Southern Poverty Law Center.
"Uihlien—whose net worth Bloomberg pegs at about $4 billion—also funded right-wing media outlets that pushed false narratives about the 2020 election. For instance, he donated $750,000 to the FDRLST, which pushed misleading claims of voter fraud. He also slipped $25,000 to the American Conservative, which published a number of articles claiming that Democrats had stolen the election, including a debunked article the evening of Jan. 6 alleging widespread fraud," The Beast reported. "Uihlein also threw a $25,000 bone to conservative watchdog Judicial Watch, run by conspiracy theorist Tom Fitton. That organization also challenged the election results."
Read the full report.
Not content with panicking about the teaching of critical race theory, conservative parents throughout America have now been gripped with fear about their schools trying to accommodate students who dress up in animal costumes.
The Daily Beast reports that in "Pennsylvania, Maine, Michigan, and Iowa in recent months, school board meetings have been disrupted by allegations that educators are giving special treatment to furry students."
The first notable instance of furry panic occurred when right-wing activists pushed a false claim about schools in Michigan placing litter boxes in bathrooms to allow furry students to use them, and the bogus rumors about furry infiltration into the public education system have only grown from there.
In York County, Pennsylvania, for example, Facebook rumors began swirling around among conservative locals warning that furries "could be in your child’s classroom hissing at your child and licking themselves."
Patch O’Furr, proprietor of the furry news site Dogpatch Press, tells The Daily Beast that many of the same people spreading bogus furry rumors are the same people agitating to ban books they don't like.
“It’s culture war, it’s control, and it’s not about protecting kids,” O’Furr explains. “If you actually look at who’s doing this, at some of the political groups getting involved, they’re all far right.”
The authoritarian right has mounted a devious, three-prong strategy to control debate, indoctrinate the young and eradicate adherence to objective reality. The tactics are central to a movement that wants to protect the status quo and solidify White dominance over American politics.
First, the right created its own media silo in which lies, propaganda and kooky conspiracy theories replace science, facts and critical thinking. This media environment promotes covid denialism and boosts anti-vaccine sentiments as a means of fighting “elites.” Such patently false dogma is the ultimate assertion of cult-thinking over reality — that doctrine is more important than one’s own life. And, if you fall for that, you’ll fall for anything (e.g., immigrants are criminals, Jan. 6 was a peaceful protest).
Second, the right has rigorously demonized complaints about racism and demands for racial inclusivity as “wokeness” or “cancel culture.” It has targeted private entities that dare to oppose its agenda, such as publishers who cancel book deals with right-wing figures or social media companies that jettison purveyors of misinformation from their platforms. Screaming about “wokeness” seeks to disable the strength of social opprobrium so that right-wingers can go on offending and demeaning others. This pushback is often couched as “academic freedom” or “free expression,” but it mistakes private shaming for state action, treats boycotts of offensive material as “censorship,” and insists private institutions reject commitments to intellectual integrity and equality.
Finally, the right has returned to old-fashioned book banning and state edicts that require education to be inoffensive to members of their own race or religion. Edited history, bastardized science and conformity to a false, comforting narrative make it impossible for a critical, fact-based reckoning with the past.
Put it all together, and it becomes clear that the MAGA movement is besieged and frightened. It knows that it can no longer dominate public institutions and monopolize political power. So instead, it seeks to bamboozle the public, destroy objective reality and disable the means by which those in power are held accountable.
What can a free society that depends on a shared set of facts and multiracial, multireligious democracy do to defend itself?
First, it can counter educational arson by making speech more accessible. School boards want to ban “Maus”? Buy a copy of the book for every schoolchild in that district. Right-wing crusaders want to excise the Rev. Martin Luther King Jr. from the curriculum? Bring in pop culture icons to lead public discussions and provide a complete picture of America’s struggle for racial equality. And, as President Biden did with the Tulsa race massacre, he can use his bully pulpit, public celebrations and monuments to share the history the right would rather bury. (It wouldn’t hurt for him to denounce book banning.)
Second, refuse to normalize lies. Don’t give Jan. 6 apologists and vaccine deniers a free pass on mainstream media. Do not treat the right’s campaign of vicious lies as a function of horserace politics. Be clear about who is doing the censoring. (Media reports that cover the spread of book banning without saying who is banning them disguise the responsible players and suggest the phenomenon is not tied to a political agenda.)
Third, private actors (e.g., book publishers, universities, social media platforms) should reiterate their standards. Not every utterance by a professor warrants a firing, but neither should egregious (let alone repeated) bigotry go unnoticed. Suspension from media platforms should precede expulsion. A more nuanced response to vile speech will reduce cries of victimhood.
Finally, the right’s successful engagement in local politics must be matched by those committed to a free, democratic society. Run for school board. Petition local leaders. Organize rallies and engage in the free marketplace of ideas. The best solution to bad speech is, and always has been, more good speech.
Too many well-meaning pundits have been distracted by the campaign against “wokeness.” Let’s not lose track of the ominous political movement pushing intellectual nihilism and enlisting the state in the service of an authoritarian agenda. That is the principal threat to our democracy.
Jhujar Singh’s marriage was straight out of a dream.
Three years ago, the resident of Punjab in northern India was 24 when he was introduced to his wife, who was 18 at the time, just at the marriageable age under Indian law. And though the arranged marriage set-up meant the two had met just a couple of times before their wedding, he had gradually fallen in love with her.
So, one day, when his wife spoke with him about her dream of relocating to Canada – the far-off country which immigration-obsessed Punjab looks at as almost their home base – he gave it serious thought.
Jhujar knew that his wife had excelled at academics, getting far better grades at school than he ever had, and had already started preparing for the International English Language Testing System (IELTS) exams – an English language proficiency test widely regarded as an important tool for global migration.
So, he happily funded her Canadian voyage with the hope that once she was settled there, it would pave the way for his move too. He told VICE that he spent nearly Rs 1.8 million ($24,000) on her tuition for IELTS, clothes, tickets, visa fees and rent.
“For the first two months, we were constantly in touch,” Jhujar told VICE. “We would chat on video call almost every day.”
And then, she ghosted him, blocking his number and changing her address in Canada.
Jhujar’s situation is not unique. And now, he and around 80 other men have come together to form a WhatsApp group named “Thugiya de Peedit” (Victims of Fraud). They all share a similar story: The husband and his family fund the woman’s education in Canada with the hope that she will get residency there, and then, be able to take her partner along. But within a few months, the wife goes off the grid, which sometimes also culminates in the discovery that she has a new partner in Canada.
The state of Punjab traditionally records a high efflux of its citizens to Canada. The Punjabi population in Canada is both statistically and socially strong, their influence reflecting in everything, from road signs being displayed in Punjabi (it’s Canada’s third most spoken language) to a record number of members in the parliament (even more than in the Indian parliament).
But life in Canada for the women who might’ve “abandoned” their Indian husbands would look vastly different from the one they left in India.
“For Punjabi women in the villages, there is absolutely no freedom,” said Satwinder Kaur, the founder of the non-profit Abandoned Brides by NRI Husbands Internationally (ABBNHI), which helps women — and now, even men — who’ve been abandoned by spouses living abroad. “No parent can claim to give their daughter absolute freedom in Punjab. So, when these smart women – often in their early 20s – go to Canada, what are the chances they will want to fall back into a life they never wanted to begin with?”
Sometimes, couples expressly marry so that they can realise their individual goals of migrating to Canada. This is often referred to as a “contract marriage,” even though Ravinder Kaur, a professor of sociology at the Indian Institute of Technology in Delhi, does not agree with the term. “For the longest time, men have been abandoning their wives. So, the newfound sympathy that men seem to be eliciting seems misplaced.”
The Ministry of External Affairs recently informed the parliament that it received 4,698 complaints from wives abandoned by their husbands between January 2016 and May 2019. There is no official data on the number of men abandoned by their wives. VICE tried reaching out to some of the women accused by their husbands of abandonment but did not receive a response from any of them.
(too long to post entire story)
Extremists targeted my 12-year-old son online.
He was playing a virtual game with friends over the summer when another child let a user into the group who they had not played with before. That account then ushered in other users, and several days later they launched a toxic tirade of harassment and flooded the chat with anti-Semitic vitriol, swastikas and neo-Nazi propaganda.
When my son pushed back, they bombarded him with aggressive, hateful messages. As soon as we blocked and reported one abusive account, another disturbing message would appear within seconds in a seemingly coordinated attack.
My son and I had previously discussed what to do if he was ever targeted online, or witnessed harassment, and we were able to respond quickly, but his experience is not unusual.
Hate speech and online abuse have been pervasive in digital spaces for many years, but the use of gaming and messaging platforms by extremists and the alt-right to target younger users is increasing as more children play online. A 2017 Pew study found that 90 percent of teens now use gaming platforms; and a 2019 survey from Common Sense Media found that 64 percent of tweens 8 to 12 years old play online games.
“Extremists are moving more and more into gaming spaces and targeting a young audience,” said Mark Potok, an expert in domestic hate groups and former senior fellow at the Southern Poverty Law Center. “This kind of access is what they have wanted for years.”
Virtual hate speech also increased during the pandemic as online activity soared, according to a report issued by L1GHT, a technology company that identifies toxic speech online.
“It’s not men in white hoods on the street anymore,” said Laura Guy, a clinical social worker in New York City who works with children who have been targeted online. “They don’t always begin with overtly hateful language. Oftentimes, they try to engage youth with edgy, dark humor and provocative jokes.”
Caregivers can use privacy settings as a first line of defense against online harassment or recruiting, but extremists find workarounds to gain access to children. For instance, they create misleading or fake accounts to lure children and their friends into accepting friend requests, or to join their games.
In Discord — a popular messaging platform where gamers can chat while playing — extremists have espoused hate and created servers glorifying Nazis. Users can organize “raiding parties” that encourage their members to barrage another server with hateful messages.
Children can unknowingly let extremists in if they post their server link on Disboard (a site not owned by Discord where people can search for Discord servers). Extremists can use that posted link to infiltrate.
Hate groups frequently use video games to recruit members, but they have also become a prime space to harass children. “If you are not one of them, you are an enemy, and they enjoy trying to make people miserable,” Mr. Potok said.
A 2020 Anti-Defamation League survey found that 68 percent of online gamers experienced severe harassment. Fifty-three percent of respondents said they were harassed based on “race/ethnicity, religion, ability status, gender or sexual orientation”; and 51 percent received threats of violence.
Mr. Potok said that online abuse is a problem all children can experience, but marginalized groups are at particular risk.
Lydia Elle, an African American business owner and writer in California, said her 11-year-old daughter began playing the online game Roblox during the pandemic to connect with school friends. Her daughter put an avatar of an African American girl on her profile and was quickly targeted, her mother said. “Virulent racists quickly honed in on her and called her horrible names,” Ms. Elle said.
Some platforms report they are taking steps to address extremism, such as using artificial intelligence to detect offensive content and increasing moderation, but many users say it has taken far too long for tech companies to address the harassment, and that it has not declined.
Representatives for Discord and Roblox each said that their platforms have zero tolerance for hate speech and “violent extremism.”
Discord uses “a mix of proactive and reactive tools to keep activity that violates our policies off the service,” the company said in a statement. These include automated search tools like PhotoDNA and ways for users to report violations. Roblox reported that it uses a “combination of machine learning and a team of over 3,000” people to detect inappropriate content.
Lori Getz, an internet safety expert and the author of “The Tech Savvy User’s Guide to the Digital World,” said that caregivers can’t control everything children are exposed to, but parents can empower children to handle difficult situations online. Here’s how:
Start the conversation early.
Talk with children in age-appropriate ways about hate — including overt and covert signs, such as words, symbols and images — and trusting their instincts if something doesn’t seem right. “If caregivers don’t talk with their children about these things, someone else will, and it may not be a credible source,” Ms. Guy said.
If children are harassed online, ensure they have support, said Robyn Silverman, a child and teen development specialist. Online abuse should be taken just as seriously as other types of abuse, she said, noting that children and teens who are targeted “can experience anxiety, depression, difficulty sleeping, stomach aches and other physical symptoms from cyber abuse.”
Maintaining an ongoing, open dialogue about online safety is crucial. Even if children are not allowed to play certain games at home, they may be exposed to them in other places. A British survey of 20,000 children ages 11-18 reported that 57 percent said they have accounts that “adults don’t know about.”
Children may withhold information from caregivers, especially if they are targeted online, out of fear of losing their games, Dr. Silverman said. “Share with your children that they won’t be in trouble if they come to you about this,” she suggested. “Let them know that you are there to support them.”
Check content and review settings.
Review online content and the accounts that your children interact with, as well as privacy settings and parental controls. Be transparent so your children know you will be checking.
Ms. Getz recommended that caregivers check game ratings. Online platforms for children under 13 have stricter privacy requirements under the federal Children’s Online Privacy Protection Rule than platforms that target older users.
Make a plan ahead of time.
Ensure that your child knows what to do if they are targeted. To start with, they should tell a trusted adult who can provide support. It is important to screenshot the comments, block offensive users, leave the game or chat, and report abusive accounts.
Reporting procedures vary by platforms — review with your child how to submit a report before a problem arises.
Reporting may feel futile when platforms are slow to respond, but experts say it matters. “Reporting helps children feel empowered that they did something,” Ms. Getz said. “Choosing not to report also means hateful accounts have little chance of being flagged.”
Caregivers should report threats of violence to law enforcement.
Encourage your child to speak up.
Kids who witness online abuse can help by making it clear that they will not be a bystander to hate, Ms. Getz said.
Ms. Elle’s daughter and her friends have a plan — if one of them is attacked, they all screenshot the comments and report the account. “Standing up to hate doesn’t just fall on the person being targeted,” Ms. Elle said. “Being an ally can really make a difference when someone is targeted.”
Calling out hate is important, Ms. Getz said, but engaging in a toxic, ongoing exchange can be traumatizing for children and gives extremists more attention.
Ms. Getz recommended that when children and adolescents witness online hate they should reply with one clearly worded response: “Let them know that what they are doing is unacceptable and you will not be a part of it — and then disengage and report the account.”
According to a report from the New York Times, two teams of "linguistic detectives" have arrived at the same conclusion pinpointing the founders of the conspiracy theory cult QAnon.
Noting that the QAnon phenomenon appears to have been kicked off on a popular online message board in 2017 with an ominous post reading "Open your eyes. Many in our govt worship Satan," the report adds that Paul Furber, a South African software developer, became the first "apostle" of the cult that later exploded and, in the long run, helped pave the way to the Capitol riot on Jan. 6.
As the Times reports, the question of who "Q" is has long bedeviled journalists, investigators and the public at large, and now it appears that the answer is close at hand.
According to the Times' David Kirkpatrick, "... two teams of forensic linguists say their analysis of the Q texts show that Mr. Furber, one of the first online commentators to call attention to the earliest messages, actually played the lead role in writing them."
"Sleuths hunting for the writer behind Q have increasingly overlooked Mr. Furber and focused their speculation on another QAnon booster: Ron Watkins, who operated a website where the Q messages began appearing in 2018 and is now running for Congress in Arizona," the report adds. "And the scientists say they found evidence to back up those suspicions as well. Mr. Watkins appears to have taken over from Mr. Furber at the beginning of 2018. Both deny writing as Q."
Adding, "The studies provide the first empirical evidence of who invented the toxic QAnon myth," Kirkpatrick goes on to explain, "Computer scientists use machine learning to compare subtle patterns in texts that a casual reader could not detect. QAnon believers attribute this 2017 message to an anonymous military insider known as Q."
"The two analyses — one by Claude-Alain Roten and Lionel Pousaz of OrphAnalytics, a Swiss start-up; the other by the French computational linguists Florian Cafiero and Jean-Baptiste Camps — built on long-established forms of forensic linguistics that can detect telltale variations, revealing the same hand in two texts," the report continues.
Mathematician Patrick Juola of Duquesne University thinks the two teams nailed it.
“What’s really powerful is the fact that both of the two independent analyses showed the same overall pattern,” he explained.
You can read in-depth details describing how they arrived at their conclusions here.
now it's time to make every detail of both of these assholes' lives morning news...they shouldn't be able to fart without it being reported, along with the aroma and how many decibels it was...they wanted to start shit, let's give them shit.... https://www.rawstory.com/qanon-2656729598/
Just another scrub. https://www.rawstory.com/qanon-2656729598/
According to a report from Axios, Donald Trump's new social media platform that has gotten off to a ragged start is creating fake accounts for major media organizations that cover sports in order to build up traffic and create a feeling of "legitimacy."
The report notes Truth Social has gone ahead and allowed extreme right-wing white nationalist Nick Fuentes to set up an account, based on screenshots provided to Axios, and that accounts for ESPN and the NFL have also popped up -- but they appear to be "bot" accounts.
Sara Fischer of Axios writes that the accounts "suggest the platform is trying to foster legitimacy," adding, "Truth Social displays accounts for various brands, including @NFL, @FoxSports, @ESPN and others. Sources confirm that none of those accounts were set up by the entities they claim to represent, although they are set up to look like they are real brand accounts, via links and logos."
According to Fischer, "There are 'BOT' labels on those accounts, which may suggest the Truth Social accounts are reposting content those brands have published on other social media sites."
You can read more details about doings at Truth Social here.
Nothing too sleazy for Trump! Is anybody surprised? He is about to be cut off from any Russian support and his relationship with Russia is under new scrutiny. His golf courses are Russian financed and these sanctions and other measures could be trouble for him. He can't find a bank or an accounting firm either.
WASHINGTON (AP) — As Russia’s war in Ukraine plays out for the world on social media, big tech platforms are moving to restrict Russian state media from using their platforms to spread propaganda and misinformation.
Google announced Tuesday that it’s blocking the YouTube channels of those outlets in Europe “effective immediately” but acknowledged “it’ll take time for our systems to fully ramp up.”
Other U.S.-owned tech companies have offered more modest changes so far: limiting the Kremlin’s reach, labeling more of this content so that people know it originated with the Russian government, and cutting Russian state organs off from whatever ad revenue they were previously making.
The changes are a careful balancing act intended to slow the Kremlin from pumping propaganda into social media feeds without angering Russian officials to the point that they yank their citizens’ access to platforms during a crucial time of war, said Katie Harbath, a former public policy director for Facebook.
“They’re trying to walk this very fine line; they’re doing this dance,” said Harbath, who now serves as director of technology and democracy at the International Republican Institute. “We want to stand up to Russia, but we also don’t want to get shut down in the country. How far can we push this?”
Meta, which owns Facebook and Instagram, announced Monday that it would restrict access to Russia’s RT and Sputnik services in Europe, following a statement by European Commission President Ursula von der Leyen over the weekend that officials are working to bar the sites throughout the EU.
Google followed Tuesday with a European ban of those two outlets on YouTube.
The U.S. has not taken similar action or applied sanctions to Russian state media, leaving the American-owned tech companies to wrestle with how to blunt the Kremlin’s reach on their own.
The results have been mixed.
RT and other Russian-state media accounts are still active on Facebook in the U.S. Twitter announced Monday that after seeing more than 45,000 tweets daily from users sharing Russian state-affiliated media links in recent days, it will add labels to content from the Kremlin’s websites. The company also said it would not recommend or direct users to Russian-affiliated websites in its search function.
Over the weekend, Meta, the Menlo Park, California-based company, announced it was banning ads from Russian state media and had removed a network of 40 fake accounts, pages and groups that published pro-Russian talking points. The network used fictitious personas posing as journalists and experts, but didn’t have much of an audience.
Facebook began labeling state-controlled media outlets in 2020.
Meanwhile, Microsoft announced it wouldn’t display content or ads from RT and Sputnik, or include RT’s apps in its app store. And Google’s YouTube restricted Russian-state media from monetizing the site through ads, although the outlets are still uploading videos every few minutes on the site.
By comparison, the hands-off approach taken by TikTok, a Chinese-owned platform popular in the U.S. for short, funny videos, has allowed pro-Russian propaganda to flourish on its site. The company did not respond to messages seeking comment.
One recent video posted to RT’s TikTok channel features a clip of Steve Bannon, a former top adviser to ex-President Donald Trump who now hosts a podcast with a penchant for misinformation and conspiracy theories.
“Ukraine isn’t even a country. It’s kind of a concept,” Bannon said in the clip, echoing a claim by Russian President Vladimir Putin. “So when we talk about sovereignty and self-determination it’s just a corrupt area where the Clintons have turned into a colony where they can steal money.”
Already, Facebook’s efforts to limit Russian state media’s reach have drawn ire from Russian officials. Last week, Meta officials said they had rebuffed Russia’s request to stop fact-checking or labeling posts made by Russian state media. Kremlin officials responded by restricting access to Facebook.
The company has also denied requests from Ukrainian officials who have asked Meta to remove access to its platforms in Russia. The move would prevent everyday Russians from using the platforms to learn about the war, voice their views or organize protests, according to Nick Clegg, recently named the company’s vice president of global affairs.
“We believe turning off our services would silence important expression at a crucial time,” Clegg wrote on Twitter Sunday.
More aggressive labeling of state media and moves to de-emphasize their content online might help reduce the spread of harmful material without cutting off a key information source, said Alexandra Givens, CEO of the Center for Democracy and Technology, a Washington-based non-profit.
“These platforms are a way for dissidents to organize and push back,” Givens said. “The clearest indication of that is the regime has been trying to shut down access to Facebook and Twitter.”
Russia has spent years creating its sprawling propaganda apparatus, which boasts dozens of sites that target millions of people in different languages. That preparation is making it hard for any tech company to mount a rapid response, said Graham Shellenberger at Miburo Solutions, a firm that tracks misinformation and influence campaigns.
“This is a system that has been built over 10 years, especially when it comes to Ukraine,” Shellenberger said. “They’ve created the channels, they’ve created the messengers. And all the sudden now, we’re starting to take action against it.”
Redfish, a Facebook page that is labeled as Russian-state controlled media, has built up a mostly U.S. and liberal-leaning audience of more than 800,000 followers over the years.
The page has in recent days posted anti-U.S. sentiment and sought to downplay Russia’s invasion of Ukraine, calling it a “military operation” and dedicating multiple posts to highlighting anti-war protests across Russia.
One Facebook post also used a picture of a map to highlight airstrikes in other parts of the world.
“Don’t let the mainstream media’s Eurocentrism dictate your moral support for victims of war,” the post read.
Last week, U.S. Sen. Mark Warner of Virginia sent letters to Google, Meta, Reddit, Telegram, TikTok and Twitter urging them to curb such Russian influence campaigns on their websites.
“In addition to Russia’s established use of influence operations as a tool of strategic influence, information warfare constitutes an integral part of Russian military doctrine,” Warner wrote.