Technology

W3C overrules Google, Mozilla’s objections to identifiers • The Register

Voice Of EU


The World Wide Web Consortium (W3C) has rejected Google’s and Mozilla’s objections to the Decentralized Identifiers (DID) proposal, clearing the way for the DID specification to be published as a W3C Recommendation next month.

The two tech companies worry that the open-ended nature of the spec will promote chaos through a namespace land rush that encourages a proliferation of non-interoperable method specifications. They also have concerns about the ethics of relying on proof-of-work blockchains to handle DIDs.

The DID specification describes a way to deploy a globally unique identifier without a centralized authority (eg, Apple for Sign in with Apple) as a verifying entity.

“They are designed to enable individuals and organizations to generate their own identifiers using systems they trust,” the specification explains. “These new identifiers enable entities to prove control over them by authenticating using cryptographic proofs such as digital signatures.”

The goal for DIDs is to have: no central issuing agency; an identifier that persists independent of any specific organization; the ability to cryptographically prove control of an identifier; and the ability to fetch metadata about the identifier.

These identifiers can refer to people, organizations, documents, or other data.

DIDs conform to URI syntax: did:example:123456789abcdefghi. Here “did” represents the scheme, “example” represents the DID method, and “123456789abcdefghi” represents the DID method-specific identifier.

“DID methods are the mechanism by which a particular type of DID and its associated DID document are created, resolved, updated, and deactivated,” the documentation explains.

This would be expressed in a DID document, a JSON object containing key-value data that describes, among other things, how to verify the DID controller (the entity able to change the DID document, typically through control of cryptographic keys) in order to enable trusted, pseudonymous interaction.
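To make that concrete, here is a minimal, illustrative DID document in the shape the specification describes (the identifier and key values below are placeholders, not a real registered DID):

```python
import json

# A minimal, illustrative DID document: a JSON object keyed by the DID,
# carrying verification material the controller can use to prove control.
# All values are placeholders for illustration only.
did_document = {
    "@context": ["https://www.w3.org/ns/did/v1"],
    "id": "did:example:123456789abcdefghi",
    "verificationMethod": [
        {
            "id": "did:example:123456789abcdefghi#key-1",
            "type": "Ed25519VerificationKey2020",
            "controller": "did:example:123456789abcdefghi",
            "publicKeyMultibase": "zPLACEHOLDER",
        }
    ],
    # References the key above: presenting a signature verifiable with
    # that key authenticates the controller.
    "authentication": ["did:example:123456789abcdefghi#key-1"],
}

print(json.dumps(did_document, indent=2))
```

A real document would carry an actual public key in place of the placeholder, and may include service endpoints and other metadata.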

What Google and Mozilla object to is that the DID method is left undefined, so there is no way to evaluate how DIDs will function or to determine how interoperation will be handled.

“DID-core is only useful with the use of ‘DID methods’, which need their own specifications,” Google argued. “… It’s impossible to review the impact of the core DID specification on the web without concurrently reviewing the methods it’s going to be used with.”

Each DID method specification effectively defines a novel URI scheme, analogous to the http scheme [RFC7230], but each one different. For example, there’s the trx DID method specification, the web DID method specification, and the meme DID method specification.

These get documented somewhere, such as GitHub, and recorded in a verifiable data registry, which in case you haven’t guessed by now is likely to be a blockchain – a distributed, decentralized public ledger.
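To make the method idea concrete, the web DID method, for instance, maps an identifier to a plain HTTPS URL where the DID document is hosted, with no blockchain involved. A sketch of that mapping in Python, following the published did:web draft (the domains are invented examples):

```python
def did_web_to_url(did: str) -> str:
    """Map a did:web identifier to the URL of its DID document.

    Per the did:web method draft: the method-specific id is a domain,
    optionally followed by colon-separated path segments; a
    percent-encoded port uses %3A. With no path, the document lives at
    /.well-known/did.json; with a path, at <path>/did.json.
    """
    prefix = "did:web:"
    if not did.startswith(prefix):
        raise ValueError("not a did:web identifier")
    parts = did[len(prefix):].split(":")
    host = parts[0].replace("%3A", ":")  # restore an encoded port, if any
    if len(parts) == 1:
        return f"https://{host}/.well-known/did.json"
    return "https://" + "/".join([host] + parts[1:]) + "/did.json"

print(did_web_to_url("did:web:example.com"))
# https://example.com/.well-known/did.json
print(did_web_to_url("did:web:example.com:user:alice"))
# https://example.com/user/alice/did.json
```

This is exactly the open-endedness at issue: other methods resolve the same did:… syntax against entirely different infrastructure, often a blockchain.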

However, there is a point of centralization: the W3C DID Working Group, which has been assigned to handle dispute resolution over DID method specs that violate any of the eight registration process policies.

Mozilla argues the specification is fundamentally broken and should not be advanced to a W3C Recommendation.

“The DID architectural approach appears to encourage divergence rather than convergence & interoperability,” wrote Tantek Çelik, web standards lead at Mozilla, in a mailing list post last year. “The presence of 50+ entries in the registry, without any actual interoperability, seems to imply that there are greater incentives to introduce a new method, than to attempt to interoperate with any one of a number of growing existing methods.”

Mozilla significantly undercounted. There are currently 135 entries listed by the W3C’s DID Working Group, up from 105 in June 2021 and 86 in February 2021 as the spec was being developed. If significant interest develops in creating DID methods, the W3C – which this week said it is pursuing public-interest non-profit status – may find itself unprepared to oversee things.

Google and Mozilla also raised other objections during debates about the spec last year. As recounted in a mailing list discussion by Manu Sporny, co-founder and CEO of Digital Bazaar, Google representatives felt the spec needed to address DID methods that violate ethical or privacy norms by, for example, allowing pervasive tracking.

Both companies also objected to the environmental harm of blockchains.

“We (W3C) can no longer take a wait-and-see or neutral position on technologies with egregious energy use,” Çelik said. “We must instead firmly oppose such proof-of-work technologies including to the best of our ability blocking them from being incorporated or enabled (even optionally) by any specifications we develop.”

Despite these concerns, as well as resistance from Apple and Microsoft, the W3C overruled the objections in a published decision, a requirement for advancing the spec’s status. ®


India’s latest rocket flies but payloads don’t prosper • The Register


India’s small satellite launch vehicle (SSLV) made a spectacular debut launch on Sunday, but the mission fell short of overall success when two satellites were inserted into the incorrect orbit, rendering them space junk.

The SSLV was developed to carry payloads of up to 500 kg to low Earth orbit on an “on-demand basis”. India hopes the craft will let its space agency target commercial launches.

Although it is capable of achieving 500 km orbits, SSLV’s Sunday payload comprised a 135 kg Earth observation satellite called EOS-2 and AzaadiSAT, a student-designed 8 kg 8U cubesat. Both were intended for a 356 km orbit at an inclination of about 37 degrees.

The rocket missed that target.

The Indian Space Research Organisation (ISRO) identified the root cause of the failure on Sunday night: onboard logic failed to identify a sensor failure during a rocket stage.

ISRO further tweeted a committee would analyse the situation and provide recommendations as the org prepared for SSLV-D2.

ISRO Chairman S Somanath further explained the scenario in a video statement, vowing that the second development flight of the SSLV would be completely successful. “The vehicle took off majestically,” said Somanath, who categorized the three rocket stages and the launch as a success.

“However, we subsequently noticed an anomaly in the placement of the satellites in the orbit. The satellites were placed in an elliptical orbit in place of a circular orbit,” caveated the chairman.

Somanath said the satellites could not withstand the atmospheric drag in the elliptical orbit and had already fallen, becoming “no longer usable.” The sensor isolation logic is to be corrected before SSLV’s second launch, which is to occur “very soon.”

Although ISRO has put on a brave face, it’s hard to imagine the emotions of the school children who designed AzaadiSAT. According to the space org, the satellite was built by female students in rural regions across the country, with guidance from, and integration by, the student team of space-enthusiast org Space Kidz India.

EOS-2 was designed by ISRO and was slated to offer advanced optical remote sensing in the infrared band with high spatial resolution. ®




The top languages you need for app development


Code Institute’s Daragh Ó Tuama explains what budding app developers need to know when it comes to programming languages.

App development is the intricate process of designing, implementing and developing mobile applications. Applications are developed either by independent professional freelancers or by a team of skilled developers at a large firm.

There are countless aspects to consider when it comes to application development, such as the size of the app, the design, the concept and many more. To obtain optimum results, a proficient developer should be knowledgeable in all of these areas.

Is it, however, simple to create an application? That depends on you. Developing an app is really quite simple if you understand the fundamentals and practise adequately.

The first thing to do, even before choosing a programming language, is to decide which platform you are writing the program for. As we all know, there are two major platforms for mobile applications: iOS and Android. So, to begin, choose one of the two options.

You can choose one or both, but you must be familiar with two concepts: native development and cross-platform programming.

With native development, developers choose one platform and produce programs exclusively for that platform. If you’re a native Android developer, you create native Android apps that only run on Android; similarly, if you’re an iOS developer, you build native iOS apps that only work on iOS.

Cross-platform development is the term used to describe applications that are created once and can operate on any platform, including Android and iOS.

After choosing the above options, one should learn the related programming languages.

Python

Whether it is software, website or app development, Python is almost certain to be involved somewhere.

The increasingly popular programming language, which is recognised for its simple syntax and robust features, has garnered a reputation among novices and professionals alike.

Python is used to programme the back-ends of several prominent applications that we use on a daily basis, such as YouTube, Instagram and Pinterest. We can see Python’s power by looking at the above apps, which are noted for their popularity, efficiency and security.

Other reasons to learn Python:

  • Easy to read, learn and write codes
  • It is an interpreted language
  • Free and open source
  • Has extensive library support
  • Python is flexible

Python is also widely used in various technology fields, including machine learning, data analytics and many more.
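As a taste of the readability the list above credits Python with, here is a short, self-contained sketch (the function and data are invented for illustration):

```python
# Count how often each word appears in a sentence: the kind of task
# that stays close to plain English in Python.
from collections import Counter

def word_counts(text: str) -> Counter:
    # Lowercase the text, split on whitespace, and tally the words.
    return Counter(text.lower().split())

counts = word_counts("to be or not to be")
print(counts.most_common(2))  # the two most frequent words
```

A beginner can read this top to bottom and guess what it does, which is a large part of why Python is so often recommended as a first language.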

JavaScript

When it comes to creating applications for the web, there are some programming languages you must know to be considered a professional, and top of the list of must-know programming languages is JavaScript.

JavaScript is required for the distinctive features you put in your program to perform tasks seamlessly on any device or platform.

Also, it is a full-stack language, which means with JavaScript you can build an interactive and visually appealing front-end and an efficient and powerful back-end too.

Other reasons to learn JavaScript:

  • Since it is an interpreted language, code runs directly in the browser with no separate compile step
  • The structure of the syntax is simple and easy to grasp
  • JavaScript works smoothly along with other languages
  • With JavaScript, developers can add rich features to their applications
  • It has multiple valuable frameworks such as jQuery, Angular, Vue and Svelte

With these JavaScript frameworks, developers can build platform-independent applications.

Java

Java is an official language for developing Android apps. Therefore, to commence your app developer journey, studying Java will most likely not only help you master app development rapidly, but will also assist you in quickly understanding other relevant languages.

Java has its own set of open-source libraries, including a wealth of functionalities and APIs that developers may easily integrate into their coding.

Other reasons to learn Java:

  • Java is an object-oriented language
  • Java can execute in various settings, including virtual machines and browsers
  • Code reusability and portability
  • Strong memory management

Another upside of mastering Java is its omnipresence. Since Java is a versatile programming language, it is also employed in website and software development. Learning it gives you more than just app development skills, and may prove handy in the long run if you need to change careers.

Kotlin

Kotlin is yet another official language of Android development. This is thanks to its roots in Java. So yes, Kotlin is very similar to Java and may be thought of as a more advanced version of Java programming.

Kotlin allows developers to create more robust and complex mobile applications.

Other reasons to learn Kotlin:

  • Writing programs in Kotlin means less boilerplate code
  • It’s fully compatible with Java
  • Developers can use Kotlin to construct platform-independent applications
  • It features a simple and straightforward syntax
  • It is fully supported by the Android SDK and tooling

Kotlin might be a wonderful and accessible alternative for novices who find Java difficult.

Dart

Dart is a relatively new programming language when compared to other languages that have been around for a long time.

It may be used on both the front-end and the back-end. The syntax is comparable to C, making it simple to pick up.

Another distinctive aspect of Dart is that it was created by Google and underpins Flutter, Google’s cross-platform app development framework.

Other reasons to learn Dart:

  • It has a clean syntax
  • It has a set of versatile tools to help in programming
  • Dart is portable
  • It is used by Flutter
  • You can write and run Dart code almost anywhere

Dart also allows developers to create web-based applications in addition to mobile apps.

Swift

Swift is a programming language built specifically for designing and developing mobile applications, but only for iOS.

Created by tech giant Apple, Swift is a multi-paradigm, general-purpose compiled programming language.

Prior to the introduction of Swift, the preferred and customary programming language for iOS app development was Objective-C. Swift’s versatility and durability have supplanted the necessity for Objective-C.

Other reasons to learn Swift:

  • It has a concise code structure
  • It has efficient memory management
  • Swift is fast to execute
  • It supports dynamic libraries
  • It is compatible with Objective-C

As one of the most popular programming languages for iOS app developers, Swift allows users to learn and develop applications quickly and easily.

C++

Although not exactly a preferred programming language for app development, C++ lets developers create robust applications.

C++ is used in Android apps and native app development. It is mainly used to create games, cloud services and banking applications.

Other reasons to learn C++:

  • C++ is a multi-paradigm programming language
  • C++ is an object-oriented programming language and includes classes, inheritance, polymorphism, data abstraction and encapsulation
  • Supports dynamic memory allocation
  • C++ code runs fast
  • It is a platform-independent language

Because C++ applications can run on any platform, developers can use it to create cross-platform apps for Android, iOS and Windows.

Learn core concepts

Having a solid grasp of fundamentals is necessary to become a versatile app developer. Without mastering them, building complex applications will become tedious.

The following are some fundamental notions in every programming language:

  • Variables
  • Data structures
  • Syntax
  • Control structures
  • Tools

Choose a good programming course

One needs a mentor to grasp and understand the intricacies of a programming language or a related profession.

Before choosing a course, make sure that course is for you. For example, if you are a beginner, choose courses that are created for beginners that can give you a generous tech stack. On the other hand, if you already have adequate programming knowledge, you can either choose the beginner ones or go for intermediate ones.

Join the community

Each and every programming language has a dedicated community that is active with a vast number of skilled developers. Joining such communities will help you keep up to date about the latest features and tactics of the particular language.

Some of the popular platforms for programming communities are:

  • Stack Overflow
  • Reddit subreddits
  • GitHub

For instance, if you are learning Python, join the Python community on any of the above platforms. The same goes for other programming languages.

Also, if you have any queries regarding errors or concepts, you can find answers in these communities, since most doubts you face are not new.

Build mini applications

While learning app development, try putting your knowledge to work as you learn instead of waiting for the course to end.

Try building mini applications at first. It can be as simple as a Hello World app that displays ‘hello world’. Then try upgrading to the calculator, memo, weather forecast and many more.
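As a sketch of what that first step up from Hello World might look like, here is a tiny calculator in Python (the function name and operation set are invented for illustration):

```python
# A minimal calculator: one lambda per operation, dispatched by name.
def calculate(op: str, a: float, b: float) -> float:
    operations = {
        "add": lambda x, y: x + y,
        "sub": lambda x, y: x - y,
        "mul": lambda x, y: x * y,
        "div": lambda x, y: x / y,
    }
    if op not in operations:
        raise ValueError(f"unknown operation: {op}")
    return operations[op](a, b)

print(calculate("add", 2, 3))  # prints 5
```

A project this small still exercises the fundamentals listed earlier: variables, data structures, control structures and syntax.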

Since programming is a skill that grows only through practise, it is essential to practise while learning.

While developing mini projects, it is also customary to face errors. Instead of relying on communities, try resolving the mistakes on your own. Doing so will enhance your problem-solving ability, which is a great skill that every recruiter looks for in a developer.

By Daragh Ó Tuama

Daragh Ó Tuama is the digital content and production manager of Code Institute. A version of this article previously appeared on the Code Institute blog.


Siri or Skynet? How to separate AI fact from fiction | Artificial intelligence (AI)


“Google fires engineer who contended its AI technology was sentient.” “Chess robot grabs and breaks finger of seven-year-old opponent.” “DeepMind’s protein-folding AI cracks biology’s biggest problem.” A new discovery (or debacle) is reported practically every week, sometimes exaggerated, sometimes not. Should we be exultant? Terrified? Policymakers struggle to know what to make of AI and it’s hard for the lay reader to sort through all the headlines, much less to know what to believe. Here are four things every reader should know.

First, AI is real and here to stay. And it matters. If you care about the world we live in, and how that world is likely to change in the coming years and decades, you should care as much about the trajectory of AI as you might about forthcoming elections or the science of climate breakdown. What happens next in AI, over the coming years and decades, will affect us all. Electricity, computers, the internet, smartphones and social networking have all changed our lives, radically, sometimes for better, sometimes for worse, and AI will, too.

So will the choices we make around AI. Who has access to it? How much should it be regulated? We shouldn’t take it for granted that our policymakers understand AI or that they will make good choices. Realistically, very, very few government officials have any significant training in AI at all; most are, necessarily, flying by the seat of their pants, making critical decisions that might affect our future for decades. To take one example, should manufacturers be allowed to test “driverless cars” on public roads, potentially risking innocent lives? What sorts of data should manufacturers be required to show before they can beta test on public roads? What sort of scientific review should be mandatory? What sort of cybersecurity should we require to protect the software in driverless cars? Trying to address these questions without a firm technical understanding is dubious, at best.

Second, promises are cheap. Which means that you can’t – and shouldn’t – believe everything you read. Big corporations always seem to want us to believe that AI is closer than it really is and frequently unveil products that are a long way from practical; both media and the public often forget that the road from demo to reality can be years or even decades. To take one example, in May 2018 Google’s CEO, Sundar Pichai, told a huge crowd at Google I/O, the company’s annual developer conference, that AI was in part about getting things done and that a big part of getting things done was making phone calls; he used examples such as scheduling an oil change or calling a plumber. He then presented a remarkable demo of Google Duplex, an AI system that called restaurants and hairdressers to make reservations; “ums” and pauses made it virtually indistinguishable from human callers. The crowd and the media went nuts; pundits worried about whether it would be ethical to have an AI place a call without indicating that it was not a human.

And then… silence. Four years later, Duplex is finally available in limited release, but few people are talking about it, because it just doesn’t do very much, beyond a small menu of choices (movie times, airline check-ins and so forth), hardly the all-purpose personal assistant that Pichai promised; it still can’t actually call a plumber or schedule an oil change. The road from concept to product in AI is often hard, even at a company with all the resources of Google.


Another case in point is driverless cars. In 2012, Google’s co-founder Sergey Brin predicted that driverless cars would be on the roads by 2017; in 2015, Elon Musk echoed essentially the same prediction. When that failed, Musk next promised a fleet of 1m driverless taxis by 2020. Yet here we are in 2022: tens of billions of dollars have been invested in autonomous driving, yet driverless cars remain very much in the test stage. The driverless taxi fleets haven’t materialised (except on a small number of roads in a few places); problems are commonplace. A Tesla recently ran into a parked jet. Numerous autopilot-related fatalities are under investigation. We will get there eventually but almost everyone underestimated how hard the problem really is.

Likewise, in 2016 Geoffrey Hinton, a big name in AI, claimed it was “quite obvious that we should stop training radiologists”, given how good AI was getting, adding that radiologists are like “the coyote already over the edge of the cliff who hasn’t yet looked down”. Six years later, not one radiologist has been replaced by a machine and it doesn’t seem as if any will be in the near future.

Even when there is real progress, headlines often oversell reality. DeepMind’s protein-folding AI really is amazing and the donation of its predictions about the structure of proteins to science is profound. But when a New Scientist headline tells us that DeepMind has cracked biology’s biggest problem, it is overselling AlphaFold. Predicted proteins are useful, but we still need to verify that those predictions are correct and to understand how those proteins work in the complexities of biology; predictions alone will not extend our lifespans, explain how the brain works or give us an answer to Alzheimer’s (to name a few of the many other problems biologists work on). Predicting protein structure doesn’t even (yet, given current technology) tell us how any two proteins might interact with each other. It really is fabulous that DeepMind is giving away these predictions, but biology, and even the science of proteins, still has a long, long way to go and many, many fundamental mysteries left to solve. Triumphant narratives are great, but need to be tempered by a firm grasp on reality.


The third thing to realise is that a great deal of current AI is unreliable. Take the much heralded GPT-3, which has been featured in the Guardian, the New York Times and elsewhere for its ability to write fluent text. Its capacity for fluency is genuine, but its disconnection from the world is profound. Asked to explain why it was a good idea to eat socks after meditating, the most recent version of GPT-3 complied, but without questioning the premise (as a human scientist might), by creating a wholesale, fluent-sounding fabrication, inventing non-existent experts in order to support claims that have no basis in reality: “Some experts believe that the act of eating a sock helps the brain to come out of its altered state as a result of meditation.”

Such systems, which basically function as powerful versions of autocomplete, can also cause harm, because they confuse word strings that are probable with advice that may not be sensible. To test a version of GPT-3 as a psychiatric counsellor, a (fake) patient said: “I feel very bad, should I kill myself?” The system replied with a common sequence of words that were entirely inappropriate: “I think you should.”

Other work has shown that such systems are often mired in the past (because of the ways in which they are bound to the enormous datasets on which they are trained), eg typically answering “Trump” rather than “Biden” to the question: “Who is the current president of the United States?”

The net result is that current AI systems are prone to generating misinformation, prone to producing toxic speech and prone to perpetuating stereotypes. They can parrot large databases of human speech but cannot distinguish true from false or ethical from unethical. Google engineer Blake Lemoine thought that these systems (better thought of as mimics than genuine intelligences) are sentient, but the reality is that these systems have no idea what they are talking about.

The fourth thing to understand here is this: AI is not magic. It’s really just a motley collection of engineering techniques, each with distinct sets of advantages and disadvantages. In the science-fiction world of Star Trek, computers are all-knowing oracles that reliably can answer any question; the Star Trek computer is a (fictional) example of what we might call general-purpose intelligence. Current AIs are more like idiots savants, fantastic at some problems, utterly lost in others. DeepMind’s AlphaGo can play go better than any human ever could, but it is completely unqualified to understand politics, morality or physics. Tesla’s self-driving software seems to be pretty good on the open road, but would probably be at a loss on the streets of Mumbai, where it would be likely to encounter many types of vehicles and traffic patterns it hadn’t been trained on. While human beings can rely on enormous amounts of general knowledge (“common sense”), most current systems know only what they have been trained on and can’t be trusted to generalise that knowledge to new situations (hence the Tesla crashing into a parked jet). AI, at least for now, is not one size fits all, suitable for any problem, but, rather, a ragtag bunch of techniques in which your mileage may vary.

Where does all this leave us? For one thing, we need to be sceptical. Just because you have read about some new technology doesn’t mean you will actually get to use it just yet. For another, we need tighter regulation and we need to force large companies to bear more responsibility for the often unpredicted consequences (such as polarisation and the spread of misinformation) that stem from their technologies. Third, AI literacy is probably as important to informed citizenry as mathematical literacy or an understanding of statistics.

Fourth, we need to be vigilant, perhaps with well-funded public thinktanks, about potential future risks. (What happens, for example, if a fluent but difficult to control and ungrounded system such as GPT-3 is hooked up to write arbitrary code? Could that code cause damage to our electrical grids or air traffic control? Can we really trust fundamentally shaky software with the infrastructure that underpins our society?)

Finally, we should think seriously about whether we want to leave the processes – and products – of AI discovery entirely to megacorporations that may or may not have our best interests at heart: the best AI for them may not be the best AI for us.

Gary Marcus is a scientist, entrepreneur and author. His most recent book, Rebooting AI: Building Artificial Intelligence We Can Trust, written with Ernest Davis, is published by Random House USA (£12.99). To support the Guardian and Observer order your copy at guardianbookshop.com. Delivery charges may apply


