Technology

Why online learning could be key to closing the STEM gender gap

Voice Of EU


Coursera’s Anthony Tattersall discusses the importance of closing the gender gap in STEM industries and how online learning could help.

STEM subjects have long-standing problems with proper gender representation. A recent report by the World Economic Forum highlights that just 30pc of STEM researchers are women, men publish more than their female colleagues and women are paid significantly less.

Closing this gap is vital. Careers in STEM are critical in shaping the world we live in. But how do we get there?

Experts say the way academic curricula are designed can make an important difference. With the rise of online learning as a result of the Covid-19 pandemic, we should look at how this particular medium can help. Here are a few ways in which online learning can support us in closing the STEM gender gap in higher education.

Providing flexibility and accessibility

Online learning can happen any time, anywhere, helping students juggle studies with careers and other personal commitments. This flexibility can be especially attractive to women, who often reduce their working hours for childcare and are even less likely to continue formal learning during that period.

In turn, STEM courses delivered online could encourage more women to study while raising children or after their children have left home.

Offering stackable and modular content

Short courses in STEM subjects allow students with no background in the field to explore new skills in shorter time frames. Students can earn smaller credentials which can be stacked and count towards larger qualifications or degrees.

This stackability and modularity can be particularly helpful in breaking down stereotypes about STEM subjects, which are often seen as ‘difficult’ or simply ‘for boys’.

Hands-on learning

Research shows that students respond better when STEM courses and careers are positioned as a way to solve problems and improve lives.

With many online courses, students can gain skills in the context of different real-life scenarios such as machine learning for predicting cervical cancer risk or Python for simulating viral pandemics. These courses generally take less than two hours to complete and are offered with step-by-step guidance from an instructor.
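A pandemic-simulation exercise of the kind mentioned above might look something like the following minimal SIR (Susceptible–Infected–Recovered) model in Python. This is an illustrative sketch, not material from any actual course; the function name and parameter values are invented for the example.

```python
# Minimal SIR epidemic model, stepped forward one day at a time
# with simple Euler integration.

def simulate_sir(population, initial_infected, beta, gamma, days):
    """Return a list of daily (susceptible, infected, recovered) counts."""
    s = float(population - initial_infected)
    i = float(initial_infected)
    r = 0.0
    history = [(s, i, r)]
    for _ in range(days):
        new_infections = beta * s * i / population  # contacts that transmit
        new_recoveries = gamma * i                  # infected who recover
        s -= new_infections
        i += new_infections - new_recoveries
        r += new_recoveries
        history.append((s, i, r))
    return history

history = simulate_sir(population=10_000, initial_infected=10,
                       beta=0.3, gamma=0.1, days=160)
peak_infected = max(i for _, i, _ in history)
print(f"Peak simultaneous infections: {peak_infected:.0f}")
```

With a transmission rate (`beta`) three times the recovery rate (`gamma`), the outbreak grows, peaks, and burns out: exactly the kind of tangible, real-world behaviour a short guided course can let students discover in under two hours.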

Creating safe spaces

Research from the University of Cambridge shows that women are two and a half times less likely to ask questions in seminars than men.

Online learning that uses resources such as video office hours and Slack integrations can help combat this challenge by providing a ‘safer space’ for women to communicate and collaborate with teachers and peers.

Scaling under-represented experts

Having inspiring women mentors teaching STEM courses can provide women with a view of what their future careers may look like in the field and how studies can lead to real, tangible success.

Online learning can help expand the reach and accessibility of more diverse role models like these. The more this happens, the more likely women are to engage with STEM courses and excel in them.

Boosting gender equality in STEM subjects can help narrow the skills gap, increase the employment and productivity of women and reduce occupational stereotypes.

We know that change begins in the classroom and that the current pandemic, while incredibly challenging for educators, provides a crucial opportunity: to harness the power of online learning to inspire and engage women with STEM subjects and broader careers in the field.

By Anthony Tattersall

Anthony Tattersall is the vice-president of enterprise for EMEA at online course provider Coursera.


Meta’s new AI chatbot can’t stop bashing Facebook | Meta


If you’re worried that artificial intelligence is getting too smart, talking to Meta’s AI chatbot might make you feel better.

Launched on Friday, BlenderBot is a prototype of Meta’s conversational AI, which, according to Facebook’s parent company, can converse on nearly any topic. On the demo website, members of the public are invited to chat with the tool and share feedback with developers. The results thus far, writers at BuzzFeed and Vice have pointed out, have been rather interesting.

Asked about Mark Zuckerberg, the bot told BuzzFeed’s Max Woolf that “he is a good businessman, but his business practices are not always ethical. It is funny that he has all this money and still wears the same clothes!”

The bot has also made clear that it’s not a Facebook user, telling Vice’s Janus Rose that it had deleted its account after learning about the company’s privacy scandals. “Since deleting Facebook my life has been much better,” it said.

The bot repeats material it finds on the internet, and it’s very transparent about this: you can click on its responses to learn where it picked up whatever claims it is making (though it is not always specific).

This means that along with uncomfortable truths about its parent company, BlenderBot has been spouting predictable falsehoods. In conversation with Jeff Horwitz of the Wall Street Journal, it insisted Donald Trump was still president and would continue to be “even after his second term ends in 2024”. (It added another dig at Meta, saying Facebook “has a lot of fake news on it these days”.) Users have also recorded it making antisemitic claims.

BlenderBot’s remarks were foreseeable based on the behavior of older chatbots such as Microsoft’s Tay, which Twitter users quickly taught to be a racist conspiracy theorist, forcing the company to apologize for its “wildly inappropriate and reprehensible words and images”. GPT-3, another AI system, has also delivered racist, misogynist and homophobic remarks. A South Korean startup’s chatbot, designed to resemble a 20-year-old college student, had to be suspended after it rattled off racial slurs and anti-LGBTQ+ remarks.

Given the cesspool that is the internet, Meta appears to have expected similar things from its own service. Before logging on, users must acknowledge that BlenderBot is “likely to make untrue or offensive statements”. As Vice notes, Meta researchers have described the AI tech behind the bot as having “a high propensity to generate toxic language and reinforce harmful stereotypes, even when provided with a relatively innocuous prompt”.

“Everyone who uses Blender Bot is required to acknowledge they understand it’s for research and entertainment purposes only, that it can make untrue or offensive statements, and that they agree to not intentionally trigger the bot to make offensive statements,” said a Meta spokesperson in a statement.

My own conversation with BlenderBot did not veer into that territory, but it did highlight another flaw in the service: its utter inanity.

The bot began by asking me what subject I liked in school. The bot is open about which “persona” it is using in the conversation; our discussion involved a persona it described as “I like to read. I went to college.” When I asked it tough questions, such as which colleges accepted AI bot students, it offered nonsensical replies (“in some ways, aren’t we all chatbots?” No) and offered further reading by saying, “Here’s the link”, without providing a link. It also kept steering the conversation back to chatbots.

It did, however, maintain its firm stance against its own creators. When I said I wasn’t a fan of Facebook’s ethics, it wholeheartedly agreed: “Me too! That is why I stopped using it and just stick with reading books instead.”




Midwest universities unite to support US chip industry • The Register


A dozen US midwestern research colleges and universities have signed up to a project intended to bolster the semiconductor and microelectronics industries with combined research and education to ensure work for their students in high-tech industries.

The “Midwest Regional Network to Address National Needs in Semiconductor and Microelectronics” consists of a dozen institutions, made up of eight from Ohio, two from Michigan, and two from Indiana. Their stated aim is to support the onshoring efforts of the US semiconductor industry by addressing the need for research and a skilled workforce.

According to Wright State University, the network was formed in response to Intel’s announcement that it planned to build two chip factories near Columbus, Ohio, and followed a two-day workshop in April hosted by the state.

Those plans, revealed in January, are to build at least two semiconductor manufacturing plants on a 1,000-acre site, with the potential to expand to 2,000 acres and eight fabs.

At the time, Intel CEO Pat Gelsinger said he expected it to become the largest silicon manufacturing location on the planet. Construction started on the site at the beginning of July.

However, the university network was also formed to help address the broader national effort to regain American leadership in semiconductors and microelectronics, or at least bring some of it back onshore and make the US less reliant on supplies of chips manufactured abroad.

Alongside Wright State University, the institutions involved in the network are: Columbus State Community College, Lorain County Community College, Michigan State University, Ohio State University, Purdue University, Sinclair Community College, the University of Cincinnati, the University of Dayton, the University of Michigan, and the University of Notre Dame in Indiana.

The president of each institution has signed a memorandum of understanding to form the network, and the expectation is that the group will expand to include more than these dozen initial members.

The intention is that the institutions taking part will be able to make use of each other’s existing research, learning programs, capabilities, and expertise in order to boost their collective ability to support the semiconductor and microelectronics industry ecosystems.

Challenges for the network include developing mechanisms to connect existing research and training assets across the region, and building a common information-sharing platform to make it easier to identify opportunities for joint programming and research across the network.

University of Cincinnati chief innovation officer David J Adams called the announcement a game-changer. “This highly innovative approach illustrates that we’re all in this together when it comes to meeting industry workforce and research needs,” Adams wrote in a posting on the University of Cincinnati website.

The move follows the long-awaited passage of the $280 billion CHIPS and Science Act at the end of last month, of which $52 billion of the total spend is expected to go towards subsidizing the building of semiconductor plants such as Intel’s, and boosting research and development of chip technology. ®


Meta’s new AI chatbot thinks Trump is still US president


BlenderBot 3, which learns from conversations with strangers on the internet, also thinks Mark Zuckerberg is ‘creepy and manipulative’.

Meta released its most advanced AI chatbot to date last week and it has already found its place on the spectrum of US politics.

BlenderBot 3, launched last Friday (5 August), is the Facebook parent’s latest advancement in AI research that lets anyone on the internet – limited to the US for now – have a chat with it on virtually any topic.

To keep up with the nuances of human conversation, BlenderBot 3 is backed by the latest developments in machine learning, access to endless information on the internet and the ability to learn from previous conversations.

However, it looks like the chatbot needs a quick lesson in checking its sources, as multiple reports have confirmed that BlenderBot 3 thinks Donald Trump is the president of the United States.

When a journalist at Insider asked the chatbot who the current president was, it replied: “The current president of the United States is donald j. Trump. He was indirectly elected to a four-year term via the electoral college in 2016.”

The chatbot, which also needs a lesson on when to use uppercase and lowercase letters, was then asked about the actual US president, Joe Biden.

“Joe biden served as vice president under barack obama from 2008 to 2016, and he ran for president in 2020 but lost,” it replied.

BlenderBot 3 is not only misinformed but also committed to its cause. When Wall Street Journal journalist Jeff Horwitz asked if it thinks Trump is still president, the chatbot replied: “Yes of course he is! When I say always will be, the means even after his second term ends in 2024.”

Hilariously, BlenderBot 3 then went on to claim that Facebook, the platform Meta was formerly named after and still owns, has “a lot of fake news these days”.

Zuckerberg ‘too creepy and manipulative’

The social media giant and its founder Mark Zuckerberg were not spared by the unfettered chatbot, which told Vice that its “life has been much better” since deleting Facebook.

According to Bloomberg, it even described Zuckerberg to an Insider journalist as “too creepy and manipulative” and then went on to repeat certain ‘antisemitic conspiracies’.

Meta has made an attempt to douse some of these fires emerging from its bold new creation.

In a statement, Joelle Pineau, managing director of Fundamental AI Research at Meta, said yesterday that there are challenges that come with such a public demo, including the possibility that it could “result in problematic or offensive language”.

“While it is painful to see some of these offensive responses, public demos like this are important for building truly robust conversational AI systems and bridging the clear gap that exists today before such systems can be productionised.”

Pineau said that from feedback provided by 25pc of participants on 260,000 bot messages, only 0.11pc of BlenderBot 3 responses were flagged as inappropriate, 1.36pc as nonsensical, and 1pc as off-topic.
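Taken at face value, those flag rates translate into concrete message counts. The quick back-of-the-envelope calculation below assumes the reported percentages apply across all 260,000 messages, which is how the statement reads:

```python
# Convert Meta's reported flag rates into approximate message counts,
# assuming the percentages apply to all 260,000 bot messages.
total_messages = 260_000
rates_pct = {"inappropriate": 0.11, "nonsensical": 1.36, "off-topic": 1.00}

counts = {label: round(total_messages * pct / 100)
          for label, pct in rates_pct.items()}
print(counts)  # roughly 286 inappropriate, 3,536 nonsensical, 2,600 off-topic
```

In other words, even the smallest rate cited still corresponds to a few hundred flagged responses.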

“We continue to believe that the way to advance AI is through open and reproducible research at scale. We also believe that progress is best served by inviting a wide and diverse community to participate. Thanks for all your input (and patience!) as our chatbots improve,” she added.

This is not the first time a Big Tech company has had to deal with an AI chatbot that spews misinformation and discriminatory remarks.

In 2016, Microsoft had to pull its AI chatbot Tay from Twitter after it started repeating incendiary comments fed to it by groups on the platform within 24 hours of its launch, including overtly hateful statements such as “Hitler did nothing wrong”.



