
Substack: the future of news – or a media pyramid scheme?


Since launching in 2017, Substack has been touting itself as a “better future for news.” Their offering was simple: email newsletters with an option for subscribers to pay monthly fees for content – like Netflix for newsletters.

If you have something to write and a list of emails of people who want to read it, the thinking goes, there is nothing stopping you from making a living on your own. With a healthy Substack email list, freelancers are no longer beholden to flaky editors; staff reporters no longer have to be insecure about layoffs; small media companies are no longer anxious that a tweak to an algorithm could send them into oblivion.

All that the company asks for in return? A 10% cut of subscription dollars.

Substack’s vision is proving enticing. In the past 12 months, several high-profile journalists and writers have left jobs to go it alone with Substack: the New York Times’ Charlie Warzel, Vox’s Matthew Yglesias, New York Magazine’s Heather Havrilesky.

The number of poets, essayists, hobbyists, cooks, advice-givers and spiritual guides who charge a modest amount for their newsletters is growing. In a year when US media lost thousands of newsroom jobs, the company emerged as a seemingly viable alternative for journalists and writers to earn money. But then, over the past few months, several revelations about Substack’s policies have led many to question whether it ought to be entrusted with crafting a vision for the future of news.

The controversy began in response to reports that the company was luring writers to the platform through a program called Substack Pro, which offered lump sums of money – as much as $250,000 – for writers to leave their jobs and take up newsletter writing. Some writers were also offered access to editors, health insurance, and a legal defender program.

On the face of it, Substack Pro was simply offering writers the benefits that usually come with full-time employment. But the program was seen as controversial for a number of reasons.

To begin with, the cohort of writers selected by the company remained undisclosed. This created an invisible tiered system, dividing those who were actively supported from those who were taking a risk in trying to build their own subscriber base.

According to journalist Annalee Newitz, this made Substack into something of a pyramid scheme. A handful of undisclosed writers were all but guaranteed to succeed, while the vast majority were providing Substack with free content, hoping to one day be able to monetize. As New York Times columnist Ben Smith put it, Substack was surreptitiously making some writers rich and turning others into “the content-creation equivalent of Uber drivers.”

The second and perhaps more fundamental problem with Substack Pro was that it contravened the company’s claims to editorial neutrality. Since launching, Substack has insisted that it is not a media company but a software company that builds tools to help writers publish newsletters, the content of which was none of their business – like a printing press for the digital age. This differentiated the company from social media platforms, which organize content algorithmically to increase engagement, and from media companies, which make active editorial decisions about what they publish.

In reality, though, Substack was doing both. They were using metrics from Twitter to identify writers with a proven ability to draw attention to themselves, and then actively poaching them. Substack’s founders, a journalist and two developers, said they wanted to provide an alternative to the instability of digital media companies and the toxicity of social media platforms. And yet, the company was actively choosing writers who had come to prominence through those channels.

Substack was, in other words, skimming the fat off the top of what they called a toxic media environment, all while claiming to offer an alternative. In the process, the company inherited some of digital media’s most entrenched problems. After it was revealed that Substack Pro had signed controversial writers Glenn Greenwald and Jesse Singal, a number of Substack writers voiced their opposition. Substack tried to avoid accountability for their selections by maintaining a veneer of neutrality, claiming to be merely a platform, not a publisher. They were trying to have their media cake and eat it, too.

The revelations about Substack Pro led to a broader conversation about the company’s content moderation policies. At the very end of last year, the company clarified their position: no porn. No spam. No doxxing or harassment. No attacks on people based on race, ethnicity, national origin, religion, sex, gender, sexual orientation, age, disability or medical condition. But the company also took the opportunity to assert their commitment to free speech. “We believe dissent and debate is important,” co-founder Hamish McKenzie wrote. “We celebrate nonconformity.”

Some saw this as a welcome invitation in what they perceived as an increasingly “woke” media landscape. Dana Loesch, the former NRA spokesperson, moved her newsletter from Mailchimp to Substack, claiming that the former “deplatforms conservatives.” Writer Andrew Sullivan, who has been criticized for his views on race and IQ, moved his column from New York Magazine over to the newsletter format.

For others, though, Substack’s position on content moderation was alienating, demonstrating that the company had little interest in actively addressing some of the thorny questions about how to host healthy media communities online. Many have decided to leave and take their newsletters, and their email lists, elsewhere.

Of course, Substack Pro represents only a very small proportion of people using the platform to write. Most write brief letters for micro-communities from whom they ask for no payment. There is an intimacy in the newsletter format that is not available on social media. I love receiving the poet and essayist Anne Boyer’s meditations in my inbox every now and then. Likewise the occasional musings and book recommendations from writer and critic Joanne McNeil.

Substack does have an interest in helping these smaller-scale writers level up to taking payment from subscribers, though. Every dollar earned by a writer on the platform contributes to Substack’s own revenue. For this reason, the company has offered no-strings-attached grants of between $500 and $5,000 in cash, to give writers more time to commit to building an audience.

The concept of creators earning money directly from a cohort of followers is certainly not new; Patreon, OnlyFans, Cameo and Clubhouse all work from a similar paradigm. Digital media might be moving away from a model where creators toil for free, trying to accumulate as many followers as possible and somehow earning a living through ad revenue or product placement. We seem, rather, to be approaching what Kevin Kelly calls the 1,000 true fans principle: if you find 1,000 people who will pay you for what you create, you can make a living as an independent creator.
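
As a back-of-envelope illustration of that principle applied to a paid newsletter (the $5 monthly price is an assumed figure, not one quoted by Substack; the 10% cut is the fee the company advertises), the arithmetic looks roughly like this:

```python
# Rough sketch of the "1,000 true fans" arithmetic for a paid newsletter.
# The $5/month price and the 1,000-subscriber count are illustrative assumptions;
# the 10% platform cut is the fee Substack advertises.

monthly_price = 5.00   # hypothetical subscription price, in dollars
true_fans = 1_000      # Kevin Kelly's threshold for an independent creator
platform_cut = 0.10    # Substack's share of subscription revenue

gross_per_year = monthly_price * 12 * true_fans          # $60,000
writer_take_home = gross_per_year * (1 - platform_cut)   # $54,000, before payment fees and taxes

print(f"Gross: ${gross_per_year:,.0f}; writer keeps: ${writer_take_home:,.0f}")
```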

But the company wants to do more: they want to be the future of news. In this quest, Substack has become the nexus for bigger questions that will define the future of digital media. What is the line between a journalist and an influencer? Are readers consumers or fans? How do we create a shared sense of reality in a media landscape made up mostly of individual writers and their loyal followers?

Despite the controversy, Substack will be part of this conversation.


Meta’s new AI chatbot can’t stop bashing Facebook


If you’re worried that artificial intelligence is getting too smart, talking to Meta’s AI chatbot might make you feel better.

Launched on Friday, BlenderBot is a prototype of Meta’s conversational AI, which, according to Facebook’s parent company, can converse on nearly any topic. On the demo website, members of the public are invited to chat with the tool and share feedback with developers. The results thus far, writers at BuzzFeed and Vice have pointed out, have been rather interesting.

Asked about Mark Zuckerberg, the bot told BuzzFeed’s Max Woolf that “he is a good businessman, but his business practices are not always ethical. It is funny that he has all this money and still wears the same clothes!”

The bot has also made clear that it’s not a Facebook user, telling Vice’s Janus Rose that it had deleted its account after learning about the company’s privacy scandals. “Since deleting Facebook my life has been much better,” it said.

The bot repeats material it finds on the internet, and it’s very transparent about this: you can click on its responses to learn where it picked up whatever claims it is making (though it is not always specific).

This means that along with uncomfortable truths about its parent company, BlenderBot has been spouting predictable falsehoods. In conversation with Jeff Horwitz of the Wall Street Journal, it insisted Donald Trump was still president and would continue to be “even after his second term ends in 2024”. (It added another dig at Meta, saying Facebook “has a lot of fake news on it these days”.) Users have also recorded it making antisemitic claims.

BlenderBot’s remarks were foreseeable based on the behavior of older chatbots such as Microsoft’s Tay, which Twitter users quickly taught to be a racist conspiracy theorist, forcing the company to apologize for its “wildly inappropriate and reprehensible words and images”. GPT-3, another AI system, has also delivered racist, misogynist and homophobic remarks. A South Korean startup’s chatbot, designed to resemble a 20-year-old college student, had to be suspended after it rattled off racial slurs and anti-LGBTQ+ remarks.

Given the cesspool that is the internet, Meta appears to have expected similar things from its own service. Before logging on, users must acknowledge that BlenderBot is “likely to make untrue or offensive statements”. As Vice notes, Meta researchers have described the AI tech behind the bot as having “a high propensity to generate toxic language and reinforce harmful stereotypes, even when provided with a relatively innocuous prompt”.

“Everyone who uses Blender Bot is required to acknowledge they understand it’s for research and entertainment purposes only, that it can make untrue or offensive statements, and that they agree to not intentionally trigger the bot to make offensive statements,” said a Meta spokesperson in a statement.

My own conversation with BlenderBot did not veer into that territory, but it did highlight another flaw in the service: its utter inanity.

The bot began by asking me what subject I liked in school. The bot is open about which “persona” it is using in the conversation; our discussion involved a persona it described as “I like to read. I went to college.” When I asked it tough questions, such as which colleges accepted AI bot students, it offered nonsensical replies (“in some ways, aren’t we all chatbots?” No) and offered further reading by saying, “Here’s the link”, without providing a link. It also kept steering the conversation back to chatbots.

It did, however, maintain its firm stance against its own creators. When I said I wasn’t a fan of Facebook’s ethics, it wholeheartedly agreed: “Me too! That is why I stopped using it and just stick with reading books instead.”




Midwest universities unite to support US chip industry


A dozen US midwestern research colleges and universities have signed up to a project intended to bolster the semiconductor and microelectronics industries with combined research and education, and to ensure work for their students in high-tech fields.

The “Midwest Regional Network to Address National Needs in Semiconductor and Microelectronics” consists of a dozen institutions: eight from Ohio, two from Michigan, and two from Indiana. Their stated aim is to support the onshoring efforts of the US semiconductor industry by addressing the need for research and a skilled workforce.

According to Wright State University, the network was formed in response to Intel’s announcement that it planned to build two chip factories near Columbus, Ohio, and followed a two-day workshop in April hosted by the state.

Those plans, revealed in January, are to build at least two semiconductor manufacturing plants on a 1,000-acre site, with the potential to expand to 2,000 acres and eight fabs.

At the time, Intel CEO Pat Gelsinger said he expected it to become the largest silicon manufacturing location on the planet. Construction started on the site at the beginning of July.

However, the university network was also formed to help address the broader national effort to regain American leadership in semiconductors and microelectronics, or at least bring some of it back onshore and make the US less reliant on supplies of chips manufactured abroad.

Alongside Wright State University, the institutions involved in the network include: Columbus State Community College, Lorain County Community College, Michigan State University, Ohio State University, Purdue University, Sinclair Community College, University of Cincinnati, University of Dayton, University of Michigan, and the University of Notre Dame, Indiana.

The president of each institution has signed a memorandum of understanding to form the network, and the expectation is that the group will expand to include more than these dozen initial members.

The intention is that the institutions taking part will be able to make use of each other’s existing research, learning programs, capabilities, and expertise in order to boost their collective ability to support the semiconductor and microelectronics industry ecosystems.

Challenges for the network include developing mechanisms to connect existing research and training assets across the region, and developing a common information-sharing platform to make it easier to identify opportunities for joint programming and research across the network.

University of Cincinnati chief innovation officer David J Adams called the announcement a game-changer. “This highly innovative approach illustrates that we’re all in this together when it comes to meeting industry workforce and research needs,” Adams wrote in a posting on the University of Cincinnati website.

The move follows the long-awaited passage of the $280 billion CHIPS and Science Act at the end of last month; $52 billion of that total is expected to go towards subsidizing the building of semiconductor plants such as Intel’s, and boosting research and development of chip technology. ®


Meta’s new AI chatbot thinks Trump is still US president


BlenderBot 3, which learns from conversations with strangers on the internet, also thinks Mark Zuckerberg is ‘creepy and manipulative’.

Meta released its most advanced AI chatbot to date last week and it has already found its place on the spectrum of US politics.

BlenderBot 3, launched last Friday (5 August), is the Facebook parent’s latest advance in AI research, and lets anyone on the internet – limited to the US for now – have a chat with it on virtually any topic.

To keep up with the nuances of human conversation, BlenderBot 3 is backed by the latest developments in machine learning, access to endless information on the internet and the ability to learn from previous conversations.
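
Meta has previously released smaller BlenderBot checkpoints openly, so for a flavour of the underlying approach one can chat with one of those earlier models through the Hugging Face transformers library. The sketch below assumes that library is installed; it is not the BlenderBot 3 demo model itself, which is far larger and served only through Meta’s website.

```python
# Minimal sketch: chatting with an earlier, openly released BlenderBot checkpoint
# via Hugging Face transformers. This is NOT the BlenderBot 3 demo model; it only
# illustrates the underlying sequence-to-sequence chatbot approach.
from transformers import BlenderbotTokenizer, BlenderbotForConditionalGeneration

model_name = "facebook/blenderbot-400M-distill"  # small, publicly released checkpoint
tokenizer = BlenderbotTokenizer.from_pretrained(model_name)
model = BlenderbotForConditionalGeneration.from_pretrained(model_name)

prompt = "Who is the current president of the United States?"
inputs = tokenizer(prompt, return_tensors="pt")
reply_ids = model.generate(**inputs, max_new_tokens=60)

print(tokenizer.decode(reply_ids[0], skip_special_tokens=True))
```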

However, it looks like the chatbot needs a quick lesson on checking its sources, as multiple reports have confirmed that BlenderBot 3 thinks Donald Trump is the president of the United States.

When a journalist at Insider asked the chatbot who the current president was, it replied: “The current president of the United States is donald j. Trump. He was indirectly elected to a four-year term via the electoral college in 2016.”

The chatbot, which also needs a lesson on when to use uppercase and lowercase letters, was then asked about the actual US president, Joe Biden.

“Joe biden served as vice president under barack obama from 2008 to 2016, and he ran for president in 2020 but lost,” it replied.

BlenderBot 3 is not only misinformed but also committed to its cause. When Wall Street Journal journalist Jeff Horwitz asked if it thinks Trump is still president, the chatbot replied: “Yes of course he is! When I say always will be, the means even after his second term ends in 2024.”

Hilariously, BlenderBot 3 then went on to claim that Facebook – Meta’s former name and a platform it now owns – has “a lot of fake news these days”.

Zuckerberg ‘too creepy and manipulative’

The social media giant and its founder Mark Zuckerberg were not spared by the unfettered chatbot when it told VICE its “life has been much better” since deleting Facebook.

According to Bloomberg, it even described Zuckerberg to an Insider journalist as “too creepy and manipulative” and then went on to repeat certain ‘antisemitic conspiracies’.

Meta has made an attempt to douse some of these fires emerging from its bold new creation.

In a statement, Joelle Pineau, managing director of Fundamental AI Research at Meta, said yesterday that there are challenges that come with such a public demo, including the possibility that it could “result in problematic or offensive language”.

“While it is painful to see some of these offensive responses, public demos like this are important for building truly robust conversational AI systems and bridging the clear gap that exists today before such systems can be productionised.”

Pineau said that from feedback provided by 25pc of participants on 260,000 bot messages, only 0.11pc of BlenderBot 3 responses were flagged as inappropriate, 1.36pc as nonsensical, and 1pc as off-topic.

“We continue to believe that the way to advance AI is through open and reproducible research at scale. We also believe that progress is best served by inviting a wide and diverse community to participate. Thanks for all your input (and patience!) as our chatbots improve,” she added.

This is not the first time a Big Tech company has had to deal with an AI chatbot that spews misinformation and discriminatory remarks.

In 2016, Microsoft had to pull its AI chatbot Tay from Twitter after it started repeating incendiary comments it was fed by groups on the platform within 24 hours of its launch, including obviously hateful statements such as “Hitler did nothing wrong”.



