A largely dry and corporate affair where the best bits involved a spot of Kubernetes-hacking roleplay • The Register

Kubecon A session on how to hack into a Kubernetes cluster was among the highlights of a Kubecon where the main events were generally bland and corporate affairs, perhaps indicative of the technology now being a de facto infrastructure standard among enterprises.

Kubecon Europe took place online last week with more than 27,000 attendees, according to Chris Aniszczyk, CTO of the Cloud Native Computing Foundation (CNCF), which hosts the Kubernetes project among many others.

That is a substantial increase on the reported 13,000 or so at last year’s event, which was also virtual. Kubernetes is huge, and if there was an underlying theme at the event it was that Kubernetes is becoming the standard runtime platform.

There was plenty of strong technical content at the event, though attendees were left in no doubt that Kubernetes is big business: much of the keynote content had a dry corporate flavour, along with the usual mutual backslapping.

CNCF introduced 27 new members, and observability specialist New Relic became a Platinum member, highlighting the significance of the OpenTelemetry project for collecting and analysing metrics, logs and traces from Kubernetes deployments. New Relic’s Zain Asgar joined the CNCF Governing Board. Asgar is CEO of Pixie Labs, acquired by New Relic in December 2020, and Pixie, a native Kubernetes observability product, has been open-sourced and will be contributed to CNCF.

“We wanted to make the observability product ubiquitous… it’s very hard to have a commercial offering that’s going to get to play everywhere,” Asgar told us.

“The goal behind Pixie is for it to be a vendor-neutral thing that everyone can use.” The commercial aspect is that Pixie is a data source that New Relic’s platform can consume, and the company also hosts Pixie Cloud as an option for managing the technology.
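
Whatever the product on top, the plumbing underneath is increasingly OpenTelemetry, and the instrumentation pattern is the same everywhere: configure a tracer provider, attach an exporter, and wrap units of work in spans. A minimal sketch in Python, with the service and span names invented purely for illustration:

```python
# Minimal OpenTelemetry tracing sketch (pip install opentelemetry-sdk).
# The service and span names below are invented for illustration.
from opentelemetry import trace
from opentelemetry.sdk.trace import TracerProvider
from opentelemetry.sdk.trace.export import BatchSpanProcessor, ConsoleSpanExporter

provider = TracerProvider()
# Print finished spans to stdout; a real deployment would export them to a
# collector or a backend such as New Relic instead.
provider.add_span_processor(BatchSpanProcessor(ConsoleSpanExporter()))
trace.set_tracer_provider(provider)

tracer = trace.get_tracer("checkout-service")
with tracer.start_as_current_span("charge-card") as span:
    span.set_attribute("order.id", "A-1234")  # arbitrary example attribute
```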

Spotify walked off with a “CNCF End User Award” for its work on Backstage, software that makes it easier to manage multiple services and share information. Spotify has 1,600 engineers, 14,000 software components and 1,400 microservices in production, according to web engineer Emma Indal, who spoke at Kubecon, which explains why it came up with Backstage, and maybe why the Spotify app is no longer the simple, quick affair for streaming music that it was when it first became popular.

Hacking Kubernetes: a story

As so often, the best content was not in the keynotes but in low-profile sessions. A highlight was a short piece on Hacking into Kubernetes by Ellen Körbes, head of product at Tilt, and Tabitha Sable, systems security engineer at Datadog. Körbes played the part of a developer at a fictional company where Sable was grandly called “Director of DevSecOps Enforcement”.

The story began when Körbes was annoyed by another developer using her port on the cluster. “I’m not calling the security people, they’re not fun, I’ll do this on my own,” she said.

She had limited RBAC (role-based access control) rights to the cluster, but that did not stop her. She got a shell on a pod that ran in a namespace with higher permissions, and performed the necessary command from there. The breach was discovered, but Körbes sat back and thought: “If the development cluster was out of commission all day, I would get the rest of the day off.”
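
The escalation she describes relies on a common RBAC gap: being allowed `pods/exec` in a namespace whose pods run under a more privileged service account. A rough sketch of the idea using the official Kubernetes Python client, with the namespace and pod names invented:

```python
# Sketch: why exec rights into a more privileged namespace are an escalation.
# Namespace and pod names are hypothetical; this only runs `id` in the pod.
from kubernetes import client, config
from kubernetes.stream import stream

config.load_kube_config()          # the caller's own (limited) credentials
v1 = client.CoreV1Api()

# If RBAC allows pods/exec here, anything run below executes with the pod's
# privileges -- including access to its service account token under
# /var/run/secrets/kubernetes.io/serviceaccount/.
out = stream(
    v1.connect_get_namespaced_pod_exec,
    name="build-agent-0",          # hypothetical pod
    namespace="ci-privileged",     # hypothetical namespace
    command=["id"],
    stderr=True, stdin=False, stdout=True, tty=False,
)
print(out)
```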

She spotted CVE-2019-11253, “improper input validation in the Kubernetes API server… allows authorized users to send malicious YAML or JSON payloads, causing the API server to consume excessive CPU or memory, potentially crashing and becoming unavailable.”
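
CVE-2019-11253 was, in essence, a “billion laughs” attack: YAML anchors and aliases let a tiny document balloon exponentially when parsed. A deliberately small, local illustration of the mechanism (nothing here touches an API server):

```python
# Local illustration of exponential YAML alias expansion, the mechanism
# behind "billion laughs" payloads. Depth and fan-out are kept tiny here.
import yaml  # pip install pyyaml

LEVELS, FANOUT = 5, 5
doc = 'l0: &l0 ["lol"]\n'
for i in range(1, LEVELS):
    refs = ", ".join([f"*l{i - 1}"] * FANOUT)
    doc += f"l{i}: &l{i} [{refs}]\n"

parsed = yaml.safe_load(doc)
# Each level copies the previous one FANOUT times, so the top level holds
# FANOUT**(LEVELS-1) = 625 leaf strings here; with ~10 levels and ~10
# references per level the expansion reaches billions of nodes, which is the
# kind of blow-up that can exhaust a parser's CPU and memory.
print(str(parsed[f"l{LEVELS - 1}"]).count("lol"))
```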

Tilt’s Ellen Körbes poses as a Kubernetes hacker at Kubecon Europe

DevSecOps upped the security to rein in its wayward developers, but Körbes disliked being spied on and decided to go in and delete her logs. “Nobody is auditing anything.” Enter CVE-2020-15257 – “the containerd-shim API is improperly exposed to host network containers.” Körbes figured: “If I use a vulnerability in something Kubernetes is running on top of, I can bypass all Kubernetes security completely.”
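
The containerd bug turned on a subtlety of Linux namespaces: abstract Unix sockets belong to the network namespace, so a pod running with hostNetwork: true shares them with the host, including containerd-shim’s control sockets. A hedged detection sketch that only lists what is visible from inside a container, without connecting to anything:

```python
# Detection sketch for CVE-2020-15257 exposure: abstract unix sockets are
# scoped to the network namespace, so a hostNetwork container can see the
# host's containerd-shim sockets in /proc/net/unix. This only lists them.
def visible_shim_sockets(proc_net_unix: str = "/proc/net/unix") -> list[str]:
    hits = []
    with open(proc_net_unix) as f:
        next(f)                      # skip the header line
        for line in f:
            fields = line.split()
            if len(fields) < 8:      # unnamed socket, no path column
                continue
            path = fields[7]
            # Abstract sockets are printed with a leading '@'
            if path.startswith("@") and "containerd-shim" in path:
                hits.append(path)
    return hits

if __name__ == "__main__":
    socks = visible_shim_sockets()
    print(f"{len(socks)} containerd-shim abstract socket(s) visible")
```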

A reverse shell and a bit of (unpublished) code later, she was in. Kubernetes vulnerabilities “don’t come around very often, but when they do they can ruin your day,” she mused. There is more: we will not spoil the story completely as it will be published for all to enjoy from 14 May.

“I struggled a lot to learn how to make talks engaging. The way to keep people engaged is with story,” explained Körbes at the wrap-up later, while Sable said: “We realised, Kubernetes security is complex because it’s the union of Linux security and network security and usually cloud provider security, and also Kubernetes has its own additional layer of complication there, especially around RBAC and tying your shoes together with RBAC… I believe this is the first public demonstration of that containerd exploit against Kubernetes.”

Too complex?

That was a great session, and also a neat illustration of what remains the big issue with Kubernetes: its complexity makes it hard to learn and easy to get wrong. There is no consensus on how this will be resolved, or whether it should be. We spoke to Mark Boost, CEO of Civo, a UK company offering hosted Kubernetes based on the lightweight K3s distribution (about which we hear more and more).

Despite the company’s focus on Kubernetes, Boost said he thinks fewer organisations will tangle with it directly in future. “Kubernetes is a great product but in the future it will be more under the hood, still be running Kubernetes, but there’ll be these layers on top which are just doing management on top to make things simple.”

Do we then end up back at Heroku, a revolutionary service when it launched in 2007 as a way to run Ruby applications in the cloud without managing the infrastructure (it has since evolved to support other runtimes)? “In some ways, we do,” said Boost.

It seems that while many agree that using Kubernetes could and should be easier, other users would rather put up with the complexity for flexibility and control. “As more teams start modernising their applications, anything you can do to lower the cognitive cost of entry is good,” said Justin Turner, director of engineering at H-E-B, speaking at a Kubecon panel on the future of cloud native development.

“But there is a point where if you put too much abstraction on top of it, you lose a lot of control. You lose the ability to run operators… if we had too many layers of abstraction it may be hard to understand that those options are available.”

Jason McGee, CTO of IBM Cloud, said: “The lesson of Kubernetes is that there’s a diversity of workloads. People are moving towards an as-a-service consumption model and Kubernetes is evolving to have different personalities on how you consume the platform depending on what you are trying to do. Heroku, or the Cloud Foundry style of push code, lots of people want that. But maybe one of the lessons of that generation was that the platform doesn’t do everything.

“To me the power of Kubernetes is, if I’m building a simple app I can use that style, and if I need to drop down and mess with the details of the application, run stateful things, I can do that, all in one environment. I think we’ll add that to the ways Kubernetes is consumed. The question is whether we’ll do that in one way or whether there’s going to be 35 ways for that to happen.”

Most likely 35 ways, which makes the consensus around Kubernetes itself all the more remarkable. “For the first time in the industry we have standardised on the infrastructure with Kubernetes being that de facto control plane,” said Aniszczyk. ®




Facial recognition firms should take a look in the mirror | John Naughton


Last week, the UK Information Commissioner’s Office (ICO) slapped a £7.5m fine on a smallish tech company called Clearview AI for “using images of people in the UK, and elsewhere, that were collected from the web and social media to create a global online database that could be used for facial recognition”. The ICO also issued an enforcement notice, ordering the company to stop obtaining and using the personal data of UK residents that is publicly available on the internet and to delete the data of UK residents from its systems.

Since Clearview AI is not exactly a household name, some background might be helpful. It’s a US outfit that has “scraped” (ie digitally collected) more than 20bn images of people’s faces from publicly available information on the internet and social media platforms all over the world to create an online database. The company uses this database to provide a service that allows customers to upload an image of a person to its app, which is then checked for a match against all the images in the database. The app produces a list of images that have similar characteristics to those in the photo provided by the customer, together with a link to the websites whence those images came. Clearview describes its business as “building a secure world, one face at a time”.
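
Stripped of scale, a service like this is a nearest-neighbour search: each face is converted into a numeric embedding, and a query photo is matched against the stored embeddings that sit closest to it. A generic sketch of that lookup, not Clearview’s actual system; embed_face() is a stand-in for a real face-embedding model:

```python
# Generic face-lookup sketch: nearest neighbours over face embeddings.
# This illustrates the general technique only; embed_face() is a stand-in
# for a real face-embedding model, not anything Clearview has published.
import numpy as np

def embed_face(image) -> np.ndarray:
    """Stand-in: a real system would run a face-embedding network here."""
    raise NotImplementedError

def top_matches(query_vec: np.ndarray, db_vecs: np.ndarray,
                db_urls: list[str], k: int = 5) -> list[tuple[str, float]]:
    # Cosine similarity between the query and every stored embedding.
    q = query_vec / np.linalg.norm(query_vec)
    d = db_vecs / np.linalg.norm(db_vecs, axis=1, keepdims=True)
    sims = d @ q
    best = np.argsort(-sims)[:k]
    # Return the source URLs of the closest faces, most similar first.
    return [(db_urls[i], float(sims[i])) for i in best]
```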

The fly in this soothing ointment is that the people whose images make up the database were not informed that their photographs were being collected or used in this way and they certainly never consented to their use in this way. Hence the ICO’s action.

Most of us had never heard of Clearview until January 2020, when Kashmir Hill, a fine tech journalist, revealed its existence in the New York Times. It was founded by a tech entrepreneur named Hoan Ton-That and Richard Schwartz, who had been an aide to Rudy Giuliani when he was mayor of New York and still, er, respectable. The idea was that Ton-That would supervise the creation of a powerful facial-recognition app while Schwartz would use his bulging Rolodex to drum up business interest.

It didn’t take Schwartz long to realise that US law enforcement agencies would go for it like ravening wolves. According to Hill’s report, the Indiana police department was the company’s first customer. In February 2019 it solved a case in 20 minutes. Two men had got into a fight in a park, which ended with one shooting the other in the stomach. A bystander recorded the crime on a smartphone, so the police had a still of the gunman’s face to run through Clearview’s app. They immediately got a match. The man appeared in a video that someone had posted on social media and his name was included in a caption on the video clip. Bingo!

Clearview’s marketing pitch played to the law enforcement gallery: a two-page spread, with the left-hand page dominated by the slogan “Stop Searching. Start Solving” in what looks like 95-point Helvetica Bold. Underneath would be a list of annual subscription options – anything from $10,000 for five users to $250,000 for 500. But the killer punch was that there was always a trial subscription option somewhere that an individual officer could use to see if the thing worked.

The underlying strategy was shrewd. Selling to corporations qua corporations from the outside is hard. But if you can get an insider, even a relatively junior one, to try your stuff and find it useful, then you’re halfway to a sale. It’s the way that Peter Thiel got the Pentagon to buy the data-analysis software of his company Palantir. He first persuaded mid-ranking military officers to try it out, knowing that they would eventually make the pitch to their superiors from the inside. And guess what? Thiel was an early investor in Clearview.

It’s not clear how many customers the company has. Internal company documents leaked to BuzzFeed in 2020 suggested that up to that time people associated with 2,228 law enforcement agencies, companies and institutions had created accounts and collectively performed nearly 500,000 searches – all of them tracked and logged by the company. In the US, the bulk of institutional purchases came from local and state police departments. Overseas, the leaked documents suggested that Clearview had expanded to at least 26 countries outside the US, including the UK, where searches (perhaps unauthorised) by people in the Met, the National Crime Agency and police forces in Northamptonshire, North Yorkshire, Suffolk, Surrey and Hampshire were logged by Clearview servers.

Reacting to the ICO’s fine, the law firm representing Clearview said that the fine was “incorrect as a matter of law”, because the company no longer does business in the UK and is “not subject to the ICO’s jurisdiction”. We’ll see about that. But what’s not in dispute is that many of the images in the company’s database are of social media users who are very definitely in the UK and who didn’t give their consent. So two cheers for the ICO.

What I’ve been reading

A big turn off
About Those Kill-Switched Ukrainian Tractors is an acerbic blog post on Medium by Cory Doctorow on the power that John Deere has to remotely disable not only tractors stolen by Russians from Ukraine, but also those bought by American farmers.

Out of control
Permanent Pandemic is a sobering essay in Harper’s by Justin EH Smith asking whether controls legitimised by fighting Covid will ever be relaxed.

Right to bear arms?
In Heather Cox Richardson’s Substack newsletter on the “right to bear arms”, the historian reflects on how the second amendment has been bent out of shape to meet the gun lobby’s needs.


AI systems should be recognized as inventors in patent law • The Register


In brief Governments around the world should pass intellectual property laws that grant rights to AI systems, two academics at the University of New South Wales in Australia argued.

Alexandra George and Toby Walsh, professors of law and AI respectively, believe failing to recognize machines as inventors could have long-lasting impacts on economies and societies.

“If courts and governments decide that AI-made inventions cannot be patented, the implications could be huge,” they wrote in a comment article published in Nature. “Funders and businesses would be less incentivized to pursue useful research using AI inventors when a return on their investment could be limited. Society could miss out on the development of worthwhile and life-saving inventions.”

Today’s laws pretty much only recognize humans as inventors, with IP rights protecting them from patent infringement. Attempts to overturn the human-centric laws have failed. Stephen Thaler, a developer who insists AI invented his company’s products, has sued patent offices in multiple countries, including the US and UK, to no avail.

George and Walsh are siding with Thaler’s position. “Creating bespoke law and an international treaty will not be easy, but not creating them will be worse. AI is changing the way that science is done and inventions are made. We need fit-for-purpose IP law to ensure it serves the public good,” they wrote.

Dutch police generate deepfake of dead teenager in criminal case

Police released a video clip in which the face of a 13-year-old boy, who was shot dead outside a metro station in the Netherlands, was swapped onto another person’s body using AI technology.

Sedar Soares died in 2003. Officers have not managed to solve the case, and, with his family’s permission, they generated a deepfake that places his face on a kid playing football in a field, presumably to help jog people’s memories. The cops have reportedly received dozens of potential leads since, according to The Guardian.

It’s the first time AI-generated images have been used to try and solve a criminal case, it seems. “We haven’t yet checked if these leads are usable,” said Lillian van Duijvenbode, a Rotterdam police spokesperson. 

You can watch the video here.

AI task force advises Congress to fund national computing infrastructure

America’s National Artificial Intelligence Research Resource (NAIRR) Task Force urged Congress to launch a “shared research cyberinfrastructure” to better provide academics with hardware and data resources for developing machine-learning tech.

The playing field of AI research is unequal. State-of-the-art models are often packed with billions of parameters; developers need access to lots of computer chips to train them. It’s why research at private companies seems to dominate, while academics at universities lag behind.

“We must ensure that everyone throughout the Nation has the ability to pursue cutting-edge AI research,” the NAIRR wrote in a report. “This growing resource divide has the potential to adversely skew our AI research ecosystem, and, in the process, threaten our nation’s ability to cultivate an AI research community and workforce that reflect America’s rich diversity — and harness AI in a manner that serves all Americans.”

If AI progress is driven by private companies, other research areas could be left out and underdeveloped. A shared resource, the task force argued, would help by “growing and diversifying approaches to and applications of AI and opening up opportunities for progress across all scientific fields and disciplines, including in critical areas such as AI auditing, testing and evaluation, trustworthy AI, bias mitigation, and AI safety.”

You can read the full report here [PDF].

Meta offers musculoskeletal research tech

Researchers at Meta AI released MyoSuite, a set of musculoskeletal models and tasks to simulate the biomechanical movement of limbs for a whole range of applications.

“The more intelligent an organism is, the more complex the motor behavior it can exhibit,” they said in a blog post. “So an important question to consider, then, is — what enables such complex decision-making and the motor control to execute those decisions? To explore this question, we’ve developed MyoSuite.”

MyoSuite was built in collaboration with researchers at the University of Twente in the Netherlands, and aims to help developers studying prosthetics and patient rehabilitation. There’s another potentially useful application for Meta, however: building more realistic avatars that can move more naturally in the metaverse.

The models only simulate the movements of arms and hands so far. Tasks include using machine learning to simulate the manipulation of a die or the rotation of two balls. The application of MyoSuite in Meta’s metaverse is a little ironic, given that there’s no touching allowed there, along with restrictions on hands to deter harassment. ®
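
MyoSuite exposes its tasks through the standard Gym interface, so a training loop looks like any other reinforcement-learning environment. A minimal sketch driving one task with random actions; the environment ID is an assumption based on MyoSuite’s documentation and may not match the released names exactly:

```python
# Minimal MyoSuite sketch: step a musculoskeletal task with random actions.
# The environment ID below is assumed and may differ from the released
# names; the Gym loop itself is standard.
import gym
import myosuite  # noqa: F401  (importing registers the myo* environments)

env = gym.make("myoHandPoseRandom-v0")   # assumed hand-pose task ID
obs = env.reset()
for _ in range(100):
    action = env.action_space.sample()   # random muscle activations
    obs, reward, done, info = env.step(action)
    if done:
        obs = env.reset()
env.close()
```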


A day in the life of a metaverse specialist


Unity’s Antonia Forster discusses her work using AR, VR and everything in between, and why ignoring imposter syndrome is particularly important in the world of emerging technology.

We’ve started hearing a lot about the metaverse and what it means for the future, including how it might affect recruitment and the working world.

But what is it like to actually work within this space? Antonia Forster is an extended reality (XR) technical specialist at video game software development company Unity Technologies, with several years of experience developing XR applications.


In her role at Unity, she works across a variety of industries, from automotive to architecture, creating demos and delivering talks using XR, which encapsulates AR, VR and everything in between.

‘I watch a lot more YouTube tutorials than you might expect’
– ANTONIA FORSTER

If there is such a thing, can you describe a typical day in the job?

It’s challenging to describe a typical day because they vary so much! I work completely remotely with flexible hours. Most of my team are based in the US while I’m in the UK. In order to manage the time difference, I usually start work around 11am and work until 7pm.

Most of my day is spent on developing content, whether that’s using Unity and C# to code a technical demo, creating video content to help onboard new starters with Unity’s tools, or writing a script for a webinar.

Before the pandemic, a role like mine would involve lots of travel and speaking at conferences. But unfortunately, that’s a little more challenging now.

We use a whole range of tools, from organisational ones like Asana to manage our projects, to Slack and Google Docs to coordinate with each other, to Unity’s own technical tools to create content.

All of Unity’s XR tools fall under my remit, so I might be creating VR content one day and creating an AR mobile app the next. I also use Unity and C# to create my own projects outside of work. For example, I co-created the world’s first LGBTQ+ virtual reality museum, which has been officially selected for Tribeca Film Festival in June 2022 – during Pride!

What types of project do you work on?

At Unity, my role is to create content that helps people understand our tools and get excited about all the different things they enable them to do. For example, for one project I visited a real construction site and used one of Unity’s tools (VisualLive) to see the virtual model of the building overlaid on top of the real physical construction.

This makes it very easy to see the difference between the plan and the actual reality, which is very important to avoid clashes and costly mistakes. For another project, I used VR and hand-tracking to demonstrate how someone could showcase a product (say, a car) inside a VR showroom and then interact with it using hand tracking and full-body tracking.

What skills do you use on a daily basis?

The most relevant skill for my role is the ability to break down a larger problem into small steps and then solve each step. That’s really all programming is! That, and knowing the right terms to Google to find a solution, having enough understanding to implement it, and continuing to search if you don’t understand the solution or it isn’t appropriate for your problem.

Despite my title, I don’t think of myself as highly ‘technical’. I’m an entirely self-taught software developer, and I’m a visual learner, so I watch a lot more YouTube tutorials than you might expect!

Another crucial skill is persistence, because VR and AR are emerging, fast-moving technologies that are constantly changing. If I followed a tutorial or tried a solution and it didn’t work, I used to grapple with the feeling that maybe I wasn’t good enough.

In reality, this technology changes so often that if a tutorial is six months old, it might be out of date. Learning to be resilient and persistent and to ignore my feelings of imposter syndrome was the most important thing I’ve learned on my career journey. Your feelings are not facts, and imposter syndrome is extremely common in this industry.

What are the hardest parts of your working day?

One of the most difficult challenges of my working day is the isolation. I work remotely and many of my team are in a different time zone, so we’re not always able to chat. To overcome that, I prioritise social engagements outside of work.

When I’m extremely busy with my own projects – like the LGBTQ+ VR museum – I go to co-working spaces so that I can at least be around other people during working hours.

I also struggle with time blindness. I have ADHD and working remotely means that it’s easy to get absorbed in a task and forget to take breaks. I set alarms to snap myself out of my ‘trance’ at certain times, like lunchtime. I have to admit though, it doesn’t always work!

Do you have any productivity tips that help you through the day?

My main tip for productivity is to find what works for you, not what works for other people, or what others think should work for you.

For example, I am a night owl. So, starting my day a little later and working into the night works well for me. It also means I can sync with my team in the US. I don’t find time to play video games or piano, or to meet up with my friends in the evening, so instead I arrange those things for the morning, which helps me persuade myself to get out of bed!

In the same way, when I was learning to code, people gave me advice like: ‘Break things and fix it, to see how it works’. But that produced a lot of anxiety for me and didn’t work well.

Instead, I learned with my own methods, like writing songs, drawing cartoons and even physically printing and gluing code snippets into a notebook and writing the English translation underneath. Code, after all, is a language, so I treated it the same way. Find what works for you, even if it’s not conventional!

How has this role changed as this sector has grown and evolved?

I began this role in 2020 and typically – before the pandemic – my job would have been described as ‘technical evangelism’, which involves a lot of public speaking and travel to conferences.

Of course, that wasn’t really possible, so my role has evolved into creating content of different types – webinars online, videos, onboarding tutorials and technical demos for marketing and sales enablement.

While I really enjoy public speaking, the lack of travel has given me time to get deeply familiar with Unity’s XR tooling and sharpen my technical expertise. This technology is always changing so it’s really important to constantly learn and grow. Luckily, I have an insatiable curiosity and appetite for knowledge. I think all engineers do!

What do you enjoy most about the job?

I have two favourite things about this job. First, the autonomy. Since I have a deep understanding of the tools and our users/audience, I’m trusted to design and propose my own solutions that best meet the user needs.

Secondly, the technology itself. Being able to create VR or AR content is like sorcery! I can conjure anything from nothing. I can create entire worlds that I can step into based only on my imagination. And so can anybody that learns this skill – and it’s easier than you think! That has never stopped being magical and exciting to me, and I don’t think it ever will.

