Predictability is the new Nirvana in software development

Extract from IT-Online: https://it-online.co.za

A growing skills shortage in the local software development industry is forcing companies to find a lower risk approach to new development projects. Both the time and materials as well as the fixed costs methods of contracting leave businesses exposed to unacceptable risk, and a hybrid methodology is gaining traction, especially with businesses looking for more predictability.

“In the wake of a chronic brain drain and the resulting wage inflation, local companies are trying to avoid growing their teams at the moment and are closely scrutinising risk associated with new projects. Predictability is now the goal for most business leaders but providing certainty when it comes to scoping, costs and delivery timelines is almost impossible when using current methodologies. C-level executives are exceptionally wary of time and budget overruns, which have become commonplace in the IT industry and so a new approach is required,” explains Sergio Barbosa, CIO of enterprise software development house, Global Kinetic, and CEO of its open banking platform, FutureBank.

Overruns now the rule, not the exception

In 2012 McKinsey and Oxford University reported that more than half of large IT projects had overrun their defined budgets by more than 45%. A decade later the firm updated its findings, and things had only gotten worse. The 2022 findings showed that just one in 200 projects reviewed had delivered the intended benefits on time and within budget. What's more, the reviewed IT projects overall had exceeded their budgets by an eye-watering 75%, had overrun schedules by 46%, and had generated 39% less value than originally predicted.

The financial impact of these overruns is staggering. According to the Consortium for Information & Software Quality (CISQ), the cost of unsuccessful development projects reached $260 billion in 2020, a 46% increase on the previous estimate two years earlier.

“Every industry is filled with examples of how the usual time and materials or fixed costs methods are simply not cutting it. Who hasn’t heard of the 1,000 percent cost and 14-year schedule overrun of the James Webb telescope? In a contracting economy, the risk of clinging to inflexible methods adds pressure. A new construct is needed to ensure that projects aren’t put on hold to appease board requirements for a lower-risk growth strategy,” Barbosa says.

In the local IT space, the fixed costs approach, while still widely embraced, is often a cause for relationship breakdowns.

“Scope will naturally evolve over time in relation to changing market conditions, user behaviour and innovation. Large fixed-cost projects can no longer be seen as a viable option in the current technology landscape, where responsiveness is key to gaining competitive advantage. Change is inevitable, and the cost and overhead of managing that change in fixed-cost projects is what creates friction that leaves both customers and employees unhappy. Unfortunately, many customers still lean toward a fixed-cost approach as it provides more tangible guarantees in terms of managing cost, even though in reality those guarantees do little to ensure return on technology investment,” shares Lorén Rose, COO at Global Kinetic.

What’s more, Rose says when clients insist on outdated methodologies, the first casualty in a project is often quality.

“Customers will often choose to compromise on quality in an effort to get to market faster, or at a lower cost, so initially most projects appear to be tracking well to time and budgets. However, the technical debt eventually accumulates, until the bulk of the development team’s efforts are spent on bug fixing and troubleshooting. This slows down the team’s ability to innovate and respond to change, and leaves very little time to gain traction on value-adding features. Without quality there is no predictability.”

An 80s fusion delivers a thoroughly modern solution

Barbosa explains that when it comes to analysing projects, the function point analysis used in the 1980s is once again immensely relevant.

“Function point estimates may not have been ideal when software was built using the waterfall methodology. However, in a world of agile development, function point estimations give the customer a clear sense of what they will be getting within a defined time frame. The expectation is managed by the customer, and they are able to prioritise the most important delivery.

“The project will almost never go over budget because you're always managing the expectation and with managed teams as a service, customers have a fixed monthly cost and fixed delivery timeline. We use techniques and tools to constantly calibrate what we are delivering based on budget and time. And with a hybrid of fixed costs and time and materials we are able to deliver the best of both methods in a way that massively reduces risk and eliminates overruns,” he explains.
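
The quote doesn’t show the arithmetic, but a function point estimate is easy to sketch. As a rough illustration – using the standard IFPUG average-complexity weights, with invented feature counts and delivery rates (this is not Global Kinetic’s actual calibration tooling) – the calculation looks something like this:

```python
# Back-of-envelope function point estimate: a minimal sketch, not a real tool.
# Weights are standard IFPUG average-complexity values; counts are invented.
WEIGHTS = {
    "external_inputs": 4,       # e.g. forms that capture data
    "external_outputs": 5,      # e.g. reports, notifications
    "external_inquiries": 4,    # e.g. simple lookups
    "internal_files": 10,       # logical data stores the app maintains
    "external_interfaces": 7,   # data shared with other systems
}
counts = {
    "external_inputs": 12,
    "external_outputs": 8,
    "external_inquiries": 15,
    "internal_files": 5,
    "external_interfaces": 3,
}

function_points = sum(WEIGHTS[k] * counts[k] for k in WEIGHTS)

# With a delivery rate calibrated sprint by sprint, budget and timeline follow.
FP_PER_SPRINT = 25        # hypothetical team velocity in function points
COST_PER_SPRINT = 40_000  # hypothetical fixed cost of the managed team

sprints = function_points / FP_PER_SPRINT
print(f"{function_points} FP ≈ {sprints:.1f} sprints ≈ ${sprints * COST_PER_SPRINT:,.0f}")
```

Recalibrating the velocity figure every sprint is what lets scope flex while the monthly cost and delivery date stay fixed.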

A costly disconnect

One of the reasons why the more commonly used methodologies don’t work is that the developers and the clients are approaching the project with polar opposite attitudes.

“Developers can sometimes be too optimistic with their estimates. Once they have solved the problem in their minds, they believe it will be quick and easy to codify it into a software solution. Clients, on the other hand, often have an over-optimistic view of the scope, but ignore the foundational features and elements needed to support the main features of a solution. Some of them have been bitten by a bad experience, or are simply exceedingly risk-averse. Combine an over-optimistic view of scope and complexity with a fixed cost and you are in a lose-lose situation,” Barbosa explains.

Barbosa says that at the outset development partners must take the time required to conduct proper discovery. All parties are in a rush to get coding, but by properly defining the problem statement they can avoid building the wrong product. What’s more, he says not all development needs to be a greenfield build: making use of existing collateral and opting for a brownfield build can deliver faster and at lower risk.

“In order to deliver the predictability that today’s business leaders are desperately looking for, software development providers must proactively work with their teams and clients. There must be proven and bullet-proof processes guiding decision making; there must be clear transparent quality standards and processes with accountable individuals ensuring they are met; and an environment that facilitates frequent incremental changes, based on predefined goals. Taking the best from old and new methodologies and creating a hybrid one that acknowledges the realities, hopes and concerns of both the client and the developers will result in a predictable environment that is best placed to deliver on all expectations,” Barbosa sums up.

Original source: Predictability the new Nirvana in software development, 24/05/23 09:33, it-online.co.za

Visit https://it-online.co.za/ for more tech news stories.

What makes a great API?

Global Kinetic Discovery Team


API design has more to do with user interface design than programming.

— Arnaud Lauret, The Design of Web APIs

 

From the wheel to the modernist skyscraper to the iPhone, the best designs are easy to grasp, simple to use, and never limited to a single use case… possibly excluding the toaster.

A couple of weeks ago, we wrote about the need to productize application programming interfaces (APIs). Today, we look at three related characteristics of a great API product: accessibility, simplicity, and standardization.  

Reliability and security are both non-negotiable too, but since our focus in this blog series is on API productization, you’ll forgive us if we concentrate on user experience instead of performance. SmartBear’s annual survey of the API community has found that API developers consistently rate performance as the top measure of an API’s success, while API consumers overwhelmingly choose ease of use, followed by accurate documentation.

Accessibility is an important characteristic of great APIs

How do we shorten the technical gap analysis phase for the prospective user and as such fast-track the buying decision?

Global Kinetic Discovery Team

 

Time is a critical factor in driving adoption and use of any API. Specifically, the time it takes for prospective users to get their heads around the product: to experiment, understand, and find some early success. Some have put forward Time to First Call as an important key performance indicator.

 Malcolm Gladwell said it takes 10,000 hours of practice for just about anyone to master anything, but he wasn’t thinking about your API. Great APIs are intuitive and easy to use. Prospective customers don’t have to complete hours of training or read pages of technical documentation to get started. If they do, if it takes too long to get there (three minutes by some estimates!), they’ll probably move on. 

 Only the very largest organizations can afford to think that just by building it, users will come – or that, once they’re there, they’ll stay. There are 24,689 APIs listed on ProgrammableWeb’s API Directory. That’s a lot of competition. 

Start with documentation

How do you help prospective users get to know your API? Concise, well organized, and up to date documentation remains essential. This is one of the first places developers look to understand what a particular API does, whether or not it will meet their needs, and what the necessary inputs are. And users won’t appreciate formats that are difficult to search, bookmark, share, or copy and paste (sorry, that’s you, Mr PDF). 

 They’ll want to see typical usage scenarios, a list of available methods and accepted parameters, as well as code examples for all product features. As we mentioned above, accurate and detailed documentation follows only ease of use as the most important characteristic of an API in users’ minds. 

For its 2021 State of Software Quality: API report, SmartBear asked respondents to name the five most important things in API documentation itself. Examples were chosen by 65 percent of respondents, followed by status and errors (55 percent), authentication (54 percent), HTTP requests (47 percent), and parameters (46 percent).

 Documentation should be approached as an integral part of the offering and product development lifecycle. There are many tools to generate accurate documentation in line with development. Global Kinetic’s API designers use Swagger together with our in-house documentation standards and style guide to produce consistently readable, navigable material.
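
The right tooling depends on the stack, but the principle is that the spec and the docs come from the same place as the code. As one hedged illustration (a Python FastAPI sketch of our own; the endpoint and model names are invented, and this isn’t necessarily Global Kinetic’s stack), a service like this publishes interactive Swagger docs automatically:

```python
# Minimal sketch: documentation generated in line with development.
# FastAPI derives an OpenAPI spec from the signatures, types, and docstrings
# below and serves Swagger UI at /docs. All names here are hypothetical.
from fastapi import FastAPI
from pydantic import BaseModel

app = FastAPI(title="Accounts API", version="1.0.0")

class Account(BaseModel):
    id: str
    display_name: str
    balance_cents: int
    currency: str = "USD"

@app.get("/accounts/{account_id}", response_model=Account,
         summary="Fetch a single account")
def get_account(account_id: str) -> Account:
    """Return one account by its identifier.

    This docstring becomes the endpoint description in the generated docs.
    """
    # Stubbed lookup, for illustration only.
    return Account(id=account_id, display_name="Demo account",
                   balance_cents=105_000)

# Run with: uvicorn main:app --reload, then open http://localhost:8000/docs
```

Because the docs are derived from the code, they can’t silently drift out of date – the property that matters most to the users surveyed above.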

Where possible, show rather than tell

How do we enable self-service integration and minimize the cost / time to revenue when onboarding new customers?

Global Kinetic Discovery Team

 

You know what though? Most developers will want to jump right in, and you want to make that as easy as possible for them to do. Just as good documentation rests on excellent structure, so do APIs benefit from well-considered design that adheres to predetermined design specifications and guidelines and conforms to widely adopted industry standards.

 Design-oriented developers aim to take maximum advantage of affordance – a term that shifts in meaning between disciplines, authors, and decades but is used in the design world for our perception of something’s possible use based on its perceived and actual properties. These are the clues designers put down to aid the discoverability of an object or element of an interface. In other words, to reduce the time it takes to get to know the product.

 “When affordances are taken advantage of, the user knows what to do just by looking: no picture, label, or instruction needed,” says Donald A. Norman in The Psychology of Everyday Things. Trails of clues like these ensure that APIs are easy to navigate simply by virtue of their design. But to craft an experience like that – to develop an API that is as independently consumable as possible – takes care and a deep understanding of the prospective customer’s context: their capabilities, experience, and goals. 

 If you’ve spent time gaining that knowledge before coding, your API will likely provide a superior user experience – and offer better performance, scalability, and security too. That’s why Global Kinetic takes a strictly design-first approach to API development.

Which brings us to the API sandbox

How do we ensure approachability through the design and implementation of developer portals with Day One access to a use-case–driven, full lifecycle developer sandbox?

Global Kinetic Discovery Team

 

API sandboxes emulate the behavior of production APIs. More than just demos, these testing environments are like a free trial and an essential part of any API product strategy. As any of the big names in e-commerce will tell you, a try-before-you-buy sales and marketing strategy helps build trust and ultimately drives sales. Allowing prospective customers to test your APIs before making a purchase, without risk or financial outlay, has the same effect.  

 Sandboxes’ benefits extend beyond onboarding too. Developers can continue to test integrations without the additional cost of “live” API requests and support calls, or the frustration of potential blocking/throttling of their API requests. At a deeper level, the independence that a sandbox affords them to experiment helps speed innovation and project progress.
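
A sandbox pays off most when graduating to production is a configuration change rather than a code change. A minimal sketch (host names, paths, and keys are hypothetical):

```python
import os
import requests  # third-party HTTP client

# The same client code runs against sandbox or production; only the base URL
# and credentials differ. Hosts and endpoints here are hypothetical.
BASE_URL = os.environ.get("API_BASE_URL", "https://sandbox.api.example.com/v1")
API_KEY = os.environ.get("API_KEY", "test_key_issued_on_day_one")

def get_account(account_id: str) -> dict:
    """Fetch an account; the call shape is identical in sandbox and production."""
    resp = requests.get(
        f"{BASE_URL}/accounts/{account_id}",
        headers={"Authorization": f"Bearer {API_KEY}"},
        timeout=10,
    )
    resp.raise_for_status()
    return resp.json()
```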

The pursuit of simplicity is an element of all great APIs

It is important to emphasize the value of simplicity and elegance, for complexity has a way of compounding difficulties and as we have seen, creating mistakes. My definition of elegance is the achievement of a given functionality with a minimum of mechanism and a maximum of clarity.

— Fernando J. Corbató, “On Building Systems That Will Fail”, 1990 Turing Award presentation

 Your customer has a problem to solve and it shouldn’t be your API. They probably don’t have time to diligently explore its delicate and manifold mysteries, and there is no real reason that they should. It can be tempting to add more of everything in pursuit of greater utility, but users are frequently more appreciative of a pared down experience. That doesn’t mean you’ve stripped out useful functionality, but that you have consciously worked from the definition stage onwards to prune away the unnecessary and mask complexity.

Ronnie Mitra, a co-founder of the API Academy and senior director of technology at Publicis Sapient, says that the API designer’s job is to manage complexity – and specifically to improve learnability, boost usability, and reduce confusion. It isn’t easy, and it again requires a laser-sharp focus on the user’s problem. Donald A. Norman again:

“Complexity can be tamed, but it requires considerable effort to do it well. Decreasing the number of buttons and displays is not the solution. The solution is to understand the total system, to design it in a way that allows all the pieces fit nicely together, so that initial learning as well as usage are both optimal. Years ago, Larry Tesler, then a vice president of Apple, argued that the total complexity of a system is a constant: as you make the person's interaction simpler, the hidden complexity behind the scenes increases. Make one part of the system simpler, said Tesler, and the rest of the system gets more complex. This principle is known today as ‘Tesler’s law of the conservation of complexity’. Tesler described it as a tradeoff: making things easier for the user means making it more difficult for the designer or engineer.”

― Donald A. Norman, Living with Complexity

Interrogate the need for everything. Cut the required number of user actions to the minimum. (“Try very hard to delete the part or process,” as Elon Musk has said.) Strive for modularity and composability. Automate what you can. Ensure that only essential data are exchanged. Patterns help make sense of complexity, and hence…

Great APIs are internally consistent and follow industry standards

A sure-fire way to turn developers off is to diverge from industry best practices. Respondents to SmartBear’s latest survey gave standardization as the most important technology challenge in the API space (52 percent, against security’s 40 percent and scalability’s 36 percent) – and previous years’ results suggest the sense of urgency in this regard is growing.

 System- and ecosystem-wide standardization and internal consistency make a huge difference in efficiency, security, and interoperability. They vastly improve the onboarding process and save time in development. We are all, by now, pretty good at finding our way around new user interfaces. They follow predictable patterns and use concepts and design elements with which we have become familiar. Ensuring that your API has a similarly predictable structure smooths the user journey. Consistency also goes a long way to ensuring backward compatibility when you update your API.
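
Error handling shows why standards beat invention. RFC 7807 (“Problem Details for HTTP APIs”) gives every failure the same shape, so a client can write one error handler for the whole surface. A sketch, with illustrative values:

```python
# Sketch: one consistent error envelope across an API, following RFC 7807
# (media type application/problem+json). Field values are illustrative.
import json

def problem_response(status: int, title: str, detail: str,
                     instance: str, type_uri: str = "about:blank"):
    """Build an RFC 7807 problem document and the header to send with it."""
    body = {
        "type": type_uri,      # URI identifying the class of error
        "title": title,        # short, human-readable summary
        "status": status,      # echoes the HTTP status code
        "detail": detail,      # explanation of this specific occurrence
        "instance": instance,  # URI of the request that failed
    }
    return status, {"Content-Type": "application/problem+json"}, json.dumps(body)

print(problem_response(
    403, "Insufficient funds",
    "The transfer of 50.00 exceeds the available balance of 30.00.",
    "/transfers/abc123",
    "https://api.example.com/errors/insufficient-funds",  # hypothetical
))
```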

Too few API portfolios demonstrate a fully consistent approach to naming, error messages, and code patterns. Many do not follow widely recognized global standards. Quality can be poor. There are reasons for this, many of them understandable. We discussed some a couple of weeks ago. In Europe, part of the problem is also that financial institutions – some of the leading users of APIs – saw little value in developing quality APIs in the lead-up to PSD2. The regulation did not seem to most institutions to be in their commercial interests. (There was also no EU standard for how they should look, something that PSD3 is set to address.)

That has changed, of course. Banks and other organizations of all shapes and sizes have come to recognize the benefits of data sharing and have launched API portals to manage API product development, security, and marketing. Many of them have worked with specialists to fast-track API productization, so that they can meet market needs faster and more reliably than they might have alone.

 

Looking to build out your API strategy? Contact Global Kinetic – we can do this for you.

Sergio Barbosa is quoted in The Banking Revolution article from ITWeb's Brainstorm - May 2022

By Matthew Burbidge


Sergio Barbosa (CEO, FutureBank) is quoted in The Banking Revolution article in the May 2022 edition of ITWeb’s Brainstorm publication; see the section on Mainframes and Core Banking below.


To read more from the May 2022 issue of Brainstorm, or to subscribe, visit https://brainstorm.itweb.co.za/content/xnklOvz1N4Kq4Ymz.

Connect with Global Kinetic this month and next. Here’s how!


Global Kinetic’s racking up air miles this May and June, flying between four big events in the fintech calendar. Here’s a heads-up on what’s going down.

If you’re going to San Francisco…

First up, it’s Martin Dippenaar and Sergio Barbosa at Finovate Spring (San Francisco, 18–20 May). Our CEO and CIO are no strangers to Finovate, but it’s been a while since they were able to join their fintech peers, financial institution executives, industry analysts, and VCs at this demo-centered conference and expo. 

We know Sergio will make a beeline for the "executive reboot" for chief innovation/transformation officers, which will center on tech innovation at regional and independent community banks and the role that fintech partnerships and BaaS can play. There’s also a discussion about how community banks and credit unions can compete in a digital world, which is something he’s written about more than once. Friday may find him at a panel discussion about strategic partnerships driving digital transformation.

Martin’s sure to catch at least one of Thursday’s panel discussions on fintech markets, investment trends, and future focus, and may be tempted by technology futurist Ian Khan painting a picture of the next 25 years in fintech. That’s a seriously big crystal ball!

And wouldn’t anyone want to hear Starling Bank founder Anne Boden describe in just 15 minutes "Why Digital Transformation Projects Generally Fail – And How To Make Sure Yours Doesn’t" or listen to Sam Kilmer at Cornerstone Advisors explain "How Embedded Finance Can Generate Over $100 Billion in Revenue for Banks" just as quickly. What, no time for questions?

We’re also curious to hear from Martin and Sergio what they thought of product demos by Agent IQ (digital engagement), Axway (API management), DocFox (account onboarding), eBankIT (omnichannel banking), Finicity (data aggregation), Finzly (open banking), Identomat (KYC and identity verification), and Skyflow (data privacy). Take notes, guys! Live tweet us, maybe? 

Or speed dating in New York

After spending time at our Palo Alto office, Martin and Sergio will land at LendIt Fintech (New York City, 25–26 May). Sergio has described this event as great for networking with its business speed-dating–like functions. 

LendIt’s agenda is still being finalized, but you’ll find a lot of potential if you sift through the TBDs. Compared to Finovate, there is less on the fintech industry itself but more on crypto, still more on fintechs serving the underbanked, and a lot more on government policy and regulation.

Sessions are a traditional length, so you’d want to pick them carefully. We’d rate Martin and Sergio’s chances Good to Very Good with most of the Embedded Finance stream, especially “Embedded Finance, New Technology Stacks and the Future of Consumer Banking” and “What Security Risks and Fraud Vectors Follow the Embedded Shift?”.

If their speed dating schedule allows, they might also be able to fit in sessions like “Building a Neobank from Scratch: Foundation, Core, and Vendors”, “Bridging Traditional and Decentralized Finance to Unlock the Most Value for Consumers”, and “The Secrets of a Successful Fintech Partnership”. 

Shopping in Dubai, maybe?

Then onto Seamless Middle East (Dubai, 31 May – 1 June), where Global Kinetic’s FutureBank open banking platform has a stand and a 15-minute demo slot. Sergio will be joined there by Cyprus-based Dan Meyer, who heads Global Kinetic’s new business development in the Middle East and Europe. 

Looking at the agenda and focusing on fintech, financial services, and payments, topics that recur very frequently this year are biometrics; digital identity and eKYC; the payments user experience; and, most strikingly, the cashless society and financial inclusivity and literacy.

By this stage in his intensive three-week tech trek, you’d forgive Sergio if he sat out most presentations, dallied over breakfast with a business partner, ran an extra kilometer at the hotel gym, or stared blankly at a bottle of Fiji Water / Christiane Amanpour / a gold tap. There’s a lot to see, but you can watch the recordings back at home.

In any case, the action will be at the FutureBank stand (H62), where the guys will be on hand to explain how this remarkable platform and fintech marketplace works.

Catching up in Amsterdam

See, this is what happens when you sit conferences out for a while. This year, Money20/20 Europe (7–9 June) has seen fit to set up a Sex & Drugs & Rock ’n’ Roll Club at the RAI Amsterdam Convention Centre. Still mostly TBD at the time of writing, the listed sessions cover CBD oil and Satoshi Nakamoto’s belly.

That sounds like one hell of a bad trip. But Dan Meyer will brave the corridors nonetheless. Global Kinetic is a system integration partner to the card issuer and processor Paymentology, which is an event sponsor, so you’ll be certain to find Dan at its stand, at least some of the time.

What’s hot at Money20/20 that isn’t cannabis-derived? Crypto, with sessions delving into Web3, DLT, DeFi, SSI, NFTs, the metaverse, and whether or not it’s all hype. Other themes are embedded finance, open banking, European digital identity, European payments infrastructure, SMEs, bank–fintech partnerships, sustainability (or the lack of it) in financial services, and something JP Morgan calls “consumerism”.

Also Anne Boden. Is she ever in the office?

 

Martin, Sergio, and Dan would love to meet up for a coffee / beer / Fiji Water at any of these events. Give them a call, pop them an email, or message them on LinkedIn. 

Has Web3 got its priorities wrong?


Co-founder and CIO of enterprise software development house, Global Kinetic, Sergio directly heads its open banking platform, FutureBank. A skilled software engineer, innovative product developer, and keen business strategist, he has participated in several notable fintech milestones, including building the southern hemisphere’s first digital-only bank all the way back in 2002.

Surveying the sorry state of consumer privacy a couple of years ago, Alan Rusbridger hypothesized a privacy “techlash” in the Guardian. In it, he nodded to a Washington Post tech journalist’s description of us “gleefully carrying surveillance machines in our pockets”, but he wasn’t calling on us to throw our phones into the Thames just yet. He felt encouraged by developments like edge computing, encryption, and blockchain:

“One estimate is that there may be 200 or 300 startups, SMEs and entrepreneurs rethinking the ownership and value of data. Finland’s MyData project is just one high-profile attempt to let individuals regain control of their own data. Other players are exploring how blockchain can strengthen privacy as a basic consumer right. The jury is out – and doubtless will be for a while yet.”

Yes and no. It’s two years later – we’ve seen an explosion in use of Signal, DuckDuckGo, DeFi, and NFTs – but the jury’s still hotly debating that exact question: the role of blockchain in protecting PII.

Enter witness for the prosecution, Moxie Marlinspike.

For those who don’t know, Moxie Marlinspike is a highly respected cryptographer and digital security specialist, a former head of security at Twitter and the founder of Signal, the privacy-optimized answer to WhatsApp. In January, Marlinspike wrote a blog post on his impressions of Web3 in its current state and his thoughts about where it would go.

Given his high profile, technical expertise, and articulate, deliberative style of communication, the post was always going to draw readers from the techie scene. His negative assessment, relying in part on his own eyebrow-raising, real-world experiences, meant it got a lot more attention than that. It seems anyone and everyone has had something to say about the piece – now, me included.

Read Sergio’s earlier blog post on Web3 here.

Web3 and the problem with servers

Web3’s idealists hope that by jumping the shiny tracks laid by the Big Tech companies, we will snatch back our privacy and reestablish personal autonomy and control within decentralized networks of computers owned by, well, just about anyone. But, in his post, Marlinspike points to a flaw in the plan:

“When people talk about blockchains, they talk about distributed trust, leaderless consensus, and all the mechanics of how that works, but often gloss over the reality that clients ultimately can’t participate in those mechanics. All the network diagrams are of servers, the trust model is between servers, everything is about servers. Blockchains are designed to be a network of peers, but not designed such that it’s really possible for your mobile device or your browser to be one of those peers.”

Servers are everywhere but in consumers’ hands. Since the average Joe or Jane only has clients (browsers and mobile devices) at their fingertips, their access to the system must be mediated by third-party–owned services provided through servers called nodes. “They’re like the gateway to the blockchain realm,” says QuikNode, a provider.
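
Concretely, a “decentralized” app on a phone usually just makes an HTTPS call to one of these hosted nodes and takes whatever comes back on faith. A sketch of an Ethereum balance lookup over JSON-RPC (the provider URL is a placeholder):

```python
import requests  # third-party HTTP client

# The client never touches the peer-to-peer network. It asks one company's
# node over HTTPS and trusts the answer. The provider URL is a placeholder.
NODE_URL = "https://mainnet.example-node-provider.io/v1/YOUR_API_KEY"

payload = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "eth_getBalance",
    "params": ["0x742d35Cc6634C0532925a3b844Bc454e4438f44e", "latest"],
}
resp = requests.post(NODE_URL, json=payload, timeout=10).json()
balance_wei = int(resp["result"], 16)  # node returns hex-encoded wei
print(balance_wei / 10**18, "ETH")
```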

Gateway or gatekeeper? Jack Dorsey and the Bitcoiners believe that already powerful crypto ventures have made accommodations for the sake of speed and functionality, which has weakened security and consolidated power in only a few hands – by making setting up and running independent nodes difficult, for instance. The benefits of the blockchain are being wasted in attempts to kickstart new network effects and maximize profits for VCs and early adopters, they say.

Marlinspike may or may not agree – he’s playing philosopher king or elder statesman to Dorsey’s freedom fighter here. It’s just that he doesn’t see control of nodes as the problem per se. He’s adamant that no-one – not even “nerds” – wants to run their own servers and it’s by ignoring that fact that we risk repeating history: “To make these technologies usable, the space is consolidating around… platforms. Again. People who will run servers for you, and iterate on the new functionality that emerges. Infura, OpenSea, Coinbase, Etherscan.”

He makes the case for a re-do:

“We should accept the premise that people will not run their own servers by designing systems that can distribute trust without having to distribute infrastructure. This means architecture that anticipates and accepts the inevitable outcome of relatively centralized client/server relationships, but uses cryptography (rather than infrastructure) to distribute trust.”
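
One way to picture “cryptography (rather than infrastructure)” at the client – our illustration, not Marlinspike’s design – is that any server may deliver the data, but the client only accepts it when a signature verifies against a key it already trusts:

```python
# Sketch: trust the signature, not the server. Uses the 'cryptography' package.
# Which key a client should pin is the hard design problem; this shows only
# the mechanical check. All names and the payload are illustrative.
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
from cryptography.exceptions import InvalidSignature

# A real client would ship with the trusted public key baked in; we generate
# a keypair here only to keep the example self-contained.
signer = Ed25519PrivateKey.generate()
trusted_public_key = signer.public_key()

message = b'{"account": "0x742d...", "balance_wei": "0x1bc16d674ec80000"}'
signature = signer.sign(message)  # produced wherever the data originates

# Client side: accept the payload from any host, but verify before using it.
try:
    trusted_public_key.verify(signature, message)
    print("payload authentic: safe to use")
except InvalidSignature:
    print("payload rejected: wrong or missing signature")
```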

Marlinspike believes this approach will help prevent Web3’s platformication, something that is already well underway. At present, OpenSea has around 95% of the global NFT trading market cornered, with volumes 12 times its closest rival’s. Ethereum had a similar stranglehold on decentralized finance at the start of 2021 but has lost share as it struggles to scale. Infura and Alchemy control almost all of the market for node services. Coinbase has over half of bitcoin trading wrapped up. It’s no surprise that Coinbase didn’t make a splash at Bitcoin 2022, the biggest crypto event in the world, this year. There’s no need.

Defenders of the evolving crypto ecosystem say that there are more and better alternatives to these providers popping up all the time, but that’s missing the point. As CoinDesk reporter Will Gottsegen wrote in October last year in relation to NFTs: “Decentralized computing doesn’t necessitate a decentralized market structure.”

It’s Wild West stuff, this

Published a month ago and with 5.5 million views and counting, Dan Olson’s YouTube demolition job “Line Goes Up – The Problem With NFTs” might put him up there with Marlinspike in the rankings of influential cryptocynics. It’s “viral”, if something over two and a quarter hours long can be called that. Discussing the video, Casey Newton at Platformer wrote:

“[I]t’s undeniable that today web3 is a mess — and not just in a ‘we haven’t finished building it’ sort of way. Web3 is a mess of a kind that it could take five or more years to fix, and that assumes the work gets started soon. And the thing is … I’m just not sure people are working on these things.”

Like, what things? Well, privacy and security. “It’s hard to imagine a bigger hurdle to the mass adoption of blockchain technologies than the absence of basic trust and safety features, and yet to date, we’ve seen very little,” says Newton, suggesting that few crypto insiders really care enough to prioritize solutions.

When Time asked economist, crypto investor, and Twitter influencer Tascha Che to answer Olson’s charge that aspects of blockchain technology encouraged fraud, she replied that blockchain was no more secure than centralized databases: “The point of the system is a revolution in how we distribute value. The point is not inventing a system that is more secure than the centralized system.”

I’m not sure that’s something you want to put in the brochure. Security – and particularly fraud prevention – ought to be hard-baked into the Web3 world where transactions are irrevocable. It needs mechanisms to ensure only legitimate transactions take place. There isn’t anything like this currently (apart from Bitcoin itself, of course).

Remember the businesses running nodes, to which consumer-side clients must connect in order to access the blockchain and use Web3 applications? On a supposedly trustless system, their word is taken as gospel, for no other reason than that Web3 apps almost never authenticate the information they pass to and from the blockchain. Marlinspike blogged:

“These client APIs are not using anything to verify blockchain state or the authenticity of responses. The results aren’t even signed. [...] So much work, energy, and time has gone into creating a trustless distributed consensus mechanism, but virtually all clients that wish to access it do so by simply trusting the outputs from these two companies [Infura and Alchemy] without any further verification.”

These apps aren’t using even the most basic security best practices, and it’s the same for wallets, the actual stores of value, because they’re clients too. Information may have been tampered with; it may not even be coming from where it should. You wouldn’t know.
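
Until the ecosystem builds real verification in, about the only defence a client has is a crude one: ask more than one provider and compare answers. A sketch (provider URLs are placeholders; real nodes may lag each other by a block, so a tolerance would be needed):

```python
import requests  # third-party HTTP client

# Crude mitigation sketch: query two independent node providers and refuse
# to proceed if they disagree. Reduced blind trust, not real verification.
PROVIDERS = [
    "https://eth.provider-one.example/v1/KEY",
    "https://eth.provider-two.example/v1/KEY",
]
payload = {"jsonrpc": "2.0", "id": 1, "method": "eth_blockNumber", "params": []}

answers = {
    url: requests.post(url, json=payload, timeout=10).json()["result"]
    for url in PROVIDERS
}

if len(set(answers.values())) == 1:
    print("providers agree:", int(next(iter(answers.values())), 16))
else:
    print("providers disagree; trusting neither:", answers)
```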

Finding alignment on large-scale security issues among many stakeholders is a challenge at the best of times; in a decentralized system, it can seem impossible. But Web3 is still small and dominated by relatively few companies. It does seem odd that they haven’t yet taken the opportunity to address a matter so central to its future success. In the minds of consumers, FOMO doesn’t apply to being hacked. An app isn’t going to replace the need for collaboration.

Safety first

Despite a decade of work and the enormous amounts of money being thrown at it, Web3 remains an insecure if not dangerous place for the initiated and uninitiated alike. Marlinspike is one of many who have made the point, and it is arguable whether its vulnerability to recentralization is a bigger threat to adoption than that.

A look into the Web3 job jar

For financial institutions exploring Web3, it certainly does look like the next version of the Internet – soon to enter its tweens – has a lot of growing up to do.

Most banks and credit unions will act towards Web3 as prudently as they always have; I probably don’t need to advise them to make any investments in technology very carefully. Similarly, I don’t have to remind them that it isn’t necessary to risk it and build it themselves. Platforms like FutureBank can provide them with a highly secure, native integration to the freewheeling new world of opportunity and set them up quickly to take advantage of fast-maturing use cases like embedded finance.

Wondering about doing business on Web3?

Contact Global Kinetic for our assessment of the risks and rewards.

Web3: Tell me if you’ve heard this one before?


Co-founder and CIO of enterprise software development house, Global Kinetic, Sergio directly heads its open banking platform, FutureBank. A skilled software engineer, innovative product developer, and keen business strategist, he has participated in several notable fintech milestones, including building the southern hemisphere’s first digital-only bank all the way back in 2002.

“On the Internet, nobody knows you’re a dog.” The New Yorker is famous for its clever cartoons – and none has summed up a pivotal moment in society so perfectly as that one, long ago in July 1993. It was an Internet meme before the Internet had memes.

What the drawing captured was a sense of the utopian potential of the then work-in-progress World Wide Web, particularly the way it might liberate users from the straitjackets of their real-world identities. The promise of reinvention, free from control, was backed by the technology itself, which seemed to guarantee a degree of anonymity over a highly decentralized and chaotic network of computers.

Sadly, things started looking less rosy for our canine friends soon after. The HTTP cookie made its appearance a little over a year after the cartoon was published. Developments snowballed, for better and worse. What the Internet came to lose in anonymity, trust and civility, it gained in utility, efficiency and convenience.

Money changes everything

The transition from the decentralized Web 1.0 to the centralized Web 2.0 was inevitable. What military comms operators, scientists and researchers had put up with in ARPANET and Gopher was never going to appeal to the mass market, as exciting as the Internet sounded. Going online didn’t take a university degree exactly, but it wasn’t a walk in the park either.

Slowly, new tech players competed to ease consumers’ access to email and the WWW, bundling the software and hardware they needed in ever more user-friendly and affordable packages. The market consolidated, conferring on winners like Microsoft, Apple, Amazon, and, yes, AOL and Yahoo, a kind of omnipresence.

On the other side of the modem, as more people came online, the incentive for every kind of business to set up shop grew, raising the commercial stakes and setting off a new gold rush, this time for consumer data (not the red herring that was the dotcom bomb), the profit from which has powered the rise and eventual omnipotence of Google, Facebook, and their Chinese equivalents.

Let’s try this again, shall we?

Different dog now. A few months ago, someone paid $450,000 for a plot adjoining land owned by Snoop Dogg in the Snoopverse, a virtual world built on The Sandbox platform. Katt Benedict, director of open finance at MX, commented on the news reported by Ron Shevlin on LinkedIn: “Conceptually, a metaverse could have been an opportunity to explore a post-Hunger Games new world. A world that has no concept of financial exclusion and class distinctions.”

It’s sad to see how fast this thing called the metaverse has come to resemble our own money-warped reality. In the same way that “rich digital experiences” today invariably involve making payments, the vision of the future metaverse you read about most is a kind of 3-D virtual shopping.

What’s this have to do with Web3, the much-hyped new incarnation of the digital world we increasingly call home?[1] New technologies have frequently been hyped as game changers only to disappoint early adopters. Scooting around the Internet (the 2.0 version), you’ll find enough idealism and hope attached to Web3 to power a Segway to Mars: Web3 and/or crypto will end censorship, state surveillance and repression; reduce fraud and corruption; counter inflation; smooth access to capital; alleviate poverty and financial exclusion; and solve the problem of developing-world landlessness – if they don’t actually usher in a post-scarcity economy.[2] The Mozilla Foundation predicts a dystopian future without web decentralization. Gavin Wood, co-founder of Ethereum and the man who coined the word Web3, believes it’s the only means of saving liberal democracy. Jack Dorsey has said he hopes Bitcoin will bring us world peace.


It really does sound wonderful, doesn’t it? Kind of like Sweden.

A luta continua!

As it stands now, Web2 is dominated by a few very large and extremely well-resourced companies that exert disproportionate control over users – aka consumers and citizens. They are sustained by enormous profits derived from those same users’ personal data and content.

Harvard Business School professor Shoshana Zuboff explains in her 2019 book The Age of Surveillance Capitalism:

“Surveillance capitalism unilaterally claims human experience as free raw material for translation into behavioral data. Although some of these data are applied to service improvement, the rest are declared as a proprietary behavioral surplus, fed into advanced manufacturing processes known as ‘machine intelligence’, and fabricated into prediction products that anticipate what you will do now, soon, and later. Finally, these prediction products are traded in a new kind of marketplace that I call behavioral futures markets. Surveillance capitalists have grown immensely wealthy from these trading operations, for many companies are willing to lay bets on our future behavior.”

Tim O’Reilly, whose definition of Web 2.0 is still the most widely used, has always been careful not to demonize Facebook and Google.[3] But even he is ringing the alarm:

“When companies are using the data they collect for our benefit, it's a great deal. When companies are using it to manipulate us, or to direct us in a way that hurts us, or that enhances their market power at the expense of competitors who might provide us better value, then they're harming us with our data.”

A Web3 utopia beckons. Or does it?

Web3’s backers, some of whom made billions investing in Facebook, say it will fix the personal data problem for good. Chris Dixon, a partner at Andreessen Horowitz, describes it as a combination of Web2’s rich functionality and the “decentralized, community-governed ethos of Web1”. He says that “this means people can become participants and shareholders, not just customers or products. Web3 is the internet owned by the builders and users, orchestrated with tokens.”

Just as with the first decentralized Internet, the technology underlying Web3 can’t be co-opted by reactionary forces, or so the line goes. No-one owns the blockchain; it’s shared. You maintain control over not only your personal data but any aspect of your digital life. Content creators – i.e. everyone – can monetize their every unique thought, action, and virtual creation. “It means that all the value that’s created can be shared amongst more people, rather than just the owners, investors and employees,” says Esther Crawford at Twitter.

Sounding less like Sweden now, more like a 1960s kibbutz.

Oddly enough, I’ve managed not to mention Moxie Marlinspike in this post. Next week, I’ll wade into the debate over weaknesses in the crypto system that he fears will result in rapid recentralization of Web3, or, as he darkly suggests it may end up: Web2x2 – “web2 but with even less privacy”.

Notes

  1. As used here, Web3 is distinct from Web 3.0. The former – the subject of this post – is a vision of a blockchain-powered decentralized web. The latter is associated, closely or not, with the Semantic Web, an ongoing effort led by Tim Berners-Lee and the W3C to make the data on the web more directly meaningful to machines, so that they can use it to make decisions independently of people.

  2. In this context, crypto does not refer to digital assets like Bitcoin or NFTs but to the global blockchain-powered infrastructure enabling them, as well as innovations like decentralized finance, decentralized autonomous organizations, and self-sovereign identity. Crypto’s close integration with standard web technologies is a precondition for a fully realized Web3.

  3. Web 2.0 has been cast primarily as “participatory” by its boosters, differentiating it from the static, passively consumed formats of Web 1.0. Cynics, among them many older techies, tend to follow Tim O’Reilly’s definition of Web 2.0 as “the network as platform”, in contrast to Web 1.0’s decentralized architecture. Some of the latter camp regard the active collaboration and content generation so characteristic of Web 2.0 as a natural development of Web 1.0 technologies, questioning the need for a new version number.