The Final Pillar: Designing for Cost


For the sticklers among you, defining cost as a pillar of good architecture may seem like a poor choice of words.

Cost is the most important aspect of any good architecture because it cuts across the other four pillars.  If you have enough money you can build anything, but that is not always a good thing: without conscious decisions about where you spend your money, you can over-engineer a solution to its detriment.  Most projects (and hence architectures) will almost certainly be constrained by budgets.  Cost, however, does not only act as a constraint on the architecture; it can also help reinforce any of the other four pillars.

So how does one design for cost?

The most logical approach would be to apportion cost to each of the other four pillars equally and start from there.  But that may leave you overinvested in a pillar that is not a priority for the target solution.  For example, if you were building a solution that lets people share cool cat photos, you may not want to invest as heavily in the Security pillar as you would in, say, the Availability and Recoverability pillar.  If you were building a financial system, on the other hand, you would want to invest heavily in the Security pillar.

The key to determining this balance is to look at the solution’s non-functional requirements (NFRs).  At Global Kinetic, we group our NFRs into several categories.

Some of these NFR categories span multiple architectural pillars: Operational Excellence, for example, touches the Performance and Scalability pillar, the Availability and Recoverability pillar, and the Efficiency of Operations pillar.  Security NFRs, on the other hand, are focused squarely on the Security pillar.

Another factor to consider is the stage your solution or product has reached in its target market or user base.  An early-stage startup, for example, may want to invest more heavily in Efficiency of Operations, which allows it to pivot and stay nimble in response to changing customer and user demands while the product gets market ready, and then invest more heavily in Performance and Scalability once the product starts gaining traction and the feature set matures.

The most important thing is to apportion at least some budget to each of the four pillars at the start, so that you are making a conscious decision about which pillar has the highest priority and which the lowest.  This budget and its apportionment across the four pillars must either meet the solution’s NFRs or come with a roadmap for how the system will meet them over time.

And there you have it: the five pillars of good solution architecture.  A final note: a good architecture is one that can meet the demands of the product and the market throughout the product’s life cycle without requiring a major redesign.  That means a good architecture comes with a roadmap of when each part of the architecture will be built out to its full capability, because you can’t build everything up front on day one.  I hope this blog series has been helpful, and if you need some help working through an architectural design for a new digital product that you are looking to take to market, you know where we are.

Building things right and building the right things.


The Sinclair C5 was a three-wheeled electric vehicle designed to be an alternative to traditional cars and bicycles. It was lightweight, compact, and had a top speed of 15 miles per hour.

Sinclair was confident that the C5 would be a huge success, and he predicted that it would revolutionize transportation in the UK. However, the product was plagued by quality issues and design flaws that made it impractical and unsafe to use.

The C5's low profile made it difficult for other drivers to see, and its small size and low speed left it vulnerable in accidents. The vehicle's battery life was also limited, and it struggled to handle hills and inclines.

Finally, the quality of the vehicle's construction was also an issue. The C5 was made of lightweight plastic, which was prone to cracking and breaking. The battery compartment was also poorly designed, and many units experienced problems with the battery leaking or overheating.

All these quality issues contributed to the failure of the Sinclair C5. Despite being a unique and innovative product, it was ultimately unable to overcome the practical challenges and safety concerns that plagued it.

We will never know if the Sinclair C5 would have ended up revolutionising transportation in the UK because it wasn’t built right.  But if you are building things right from the start, how do you know you are building the right thing?

Building the right thing.

In 2012, Airbnb was struggling to gain traction and grow. The company realized it needed to improve the user experience on its platform to make it easier for people to find and book accommodation.

To address this challenge, Airbnb turned to Design Sprints to quickly test and iterate new ideas. They assembled a cross-functional team, including designers, developers, and product managers, and began conducting week-long sprints to prototype and test new features.

Through this process, the team was able to rapidly iterate on ideas and gain valuable feedback from users. One notable outcome of the Design Sprints was the creation of the "Wish List" feature, which allowed users to save and share properties they were interested in.

The Wish List feature was a huge success, and it helped to significantly improve the user experience on the Airbnb platform. This, in turn, led to increased growth and adoption of the platform.

Since implementing Design Sprints, Airbnb has continued to use the process to develop new products and features. This has helped the company stay ahead of the competition and maintain its position as a leader in the sharing economy.  Design Sprints have ensured that the company is building the right thing.

Design Sprints at a glance.

A Design Sprint is a structured process for quickly exploring, testing, and validating ideas for new products, features, or services. It typically involves a cross-functional team working together over the course of a week to develop a prototype and test it with real users.

The goal of a Design Sprint is to rapidly prototype and test a new idea to determine whether it has the potential to succeed in the market. By working in a structured and collaborative way, teams can quickly identify potential issues and address them before investing significant time and resources in development.

From a technology perspective, a Design Sprint can be especially helpful in ensuring that the team is building the right thing. By prototyping and testing with real users, the team can gain valuable feedback and insights about the user experience, which can be used to inform the technology decisions that are made during development.

For example, if the team discovers during the Design Sprint that users are struggling with a particular aspect of the prototype, they can use that information to adjust the technology implementation to better meet user needs. This can help to ensure that the final product is not only technically sound, but also meets the needs and expectations of the target users.

A fall from grace.

In the early 2000s, Blockbuster had the opportunity to acquire a small DVD-by-mail rental service called Netflix. However, Blockbuster declined the offer, believing that the DVD-by-mail business was not a significant threat to their brick-and-mortar business model.

Blockbuster also failed to recognize the growing trend of online streaming, and when they eventually launched their own streaming service in 2010, it was too little, too late. Their service was clunky and difficult to use, and it could not compete with the more established streaming services such as Netflix and Hulu.

Blockbuster's lack of foresight in the changing technology landscape ultimately led to their downfall. In 2010, Blockbuster filed for bankruptcy, and by 2014, the company had closed all of its remaining stores.

The importance of building the right thing from a technology perspective whilst at the same time being open to adapting to changing technologies and consumer preferences cannot be overstated. Failing to do so can have serious consequences, even for well-established companies with a large market share.

In summary, you need to do both.  Build the right thing AND build it right!

At Global Kinetic we pride ourselves on this approach, which we bring to all our projects.  Our Discovery process is designed to ensure it, and includes Design Sprints as a tool for building the right things.  Our Delivery process has been refined over many years of continuous improvement to ensure we build things right.  To find out more about how we can help de-risk your technology investment and build an award-winning product, contact our sales team now.

The 5 Pillars of Good Solution Architecture: Designing for Efficiency


Sergio Barbosa (CIO, Global Kinetic)


The wave of cloud computing that hit the tech industry during the first decade of the century brought the promise of reduced infrastructure costs through on-demand utilization.  In layman’s terms, you only paid for the infrastructure you used, for the time you used it.  No longer did you have to purchase a powerful, expensive server up front that could handle your system’s peak workloads, only to have it sit idle most of the time.  With cloud computing, the promise was that you could run your maximum workloads on powerful servers for the one or two hours you needed them, and then scale down to a small server for the rest of the time, drastically reducing your infrastructure costs.

That was easier said than done.  We quickly discovered that achieving this requires system diagnostics: you need to know when you need the big server, when you need the small one, and for how long.  That means building monitoring into your system from the outset, so the system can give you the diagnostics you need to make infrastructure decisions.  But not all systems are that predictable.  There are four basic demand models, and a single system can exhibit a combination of them if it is a more modern, modular or microservices-based system.

The microservices that power the finance department of a company for example might have very specific predictable demand at month end when payments are made and reconciliation processes are run, whereas the microservices that power the onboarding of new customers may have an unpredictable demand as some external forces could drive demand for new customer sign ups that weren’t previously anticipated.

Some systems may have a requirement for an on-premise component for whatever reason, and hybrid infrastructure architectures are very common.  It is important to ensure that your on-premise infrastructure does not become a bottleneck for your elastic cloud infrastructure in hybrid scenarios.

A good way to approach cost efficiencies for a system is to organize the infrastructure being utilized.  In most cloud environments you can make use of subscriptions, resource groups and tags to assign resources to different cost centres within a large enterprise.  Organizing system resources like this will help you optimize the spend.  Optimizations can be done at an IaaS (Infrastructure as a Service) level with compute and storage provisioning, or at a PaaS (Platform as a Service) level with database, blob and orchestration services like Kubernetes provided on demand by most cloud providers.
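As a small illustration of why consistent tagging pays off, here is a Python sketch that rolls a cost export up by a cost-centre tag. This is a sketch under stated assumptions: the CSV column names and the "key:value;key:value" tag encoding are invented for the example, not any cloud provider's actual export format.

```python
# Illustrative only: aggregate a cloud cost export by a "cost-centre" tag.
# Column names and tag encoding are assumptions, not a provider's real schema.
import csv
from collections import defaultdict

def cost_by_tag(path, tag="cost-centre"):
    totals = defaultdict(float)
    with open(path, newline="") as f:
        for row in csv.DictReader(f):
            # Tags assumed exported as "k1:v1;k2:v2"; untagged spend is flagged.
            pairs = (p.split(":", 1) for p in row.get("tags", "").split(";") if ":" in p)
            totals[dict(pairs).get(tag, "untagged")] += float(row["cost"])
    return dict(totals)
```

Surfacing the "untagged" bucket is the point: spend you cannot attribute to a cost centre is spend you cannot optimize.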

As mentioned before, the key to understanding where you can optimize a system and make it more efficient from a cost and/or utilization perspective (we all want to save the planet, right?) is monitoring.  The formula is simple: Monitoring + Analytics = Insights.  Core system monitoring involves four specific areas.
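To make the idea concrete, here is a minimal health-check endpoint sketch using only Python's standard library. A real system would expose far richer diagnostics (error rates, queue depths, dependency checks), but the principle of building the signal in from day one is the same.

```python
# Minimal health/diagnostics endpoint, standard library only (illustrative).
import json
import time
from http.server import BaseHTTPRequestHandler, HTTPServer

START = time.time()

class HealthHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        if self.path != "/health":
            self.send_response(404)
            self.end_headers()
            return
        # Report liveness plus one simple metric; real systems add many more.
        body = json.dumps({
            "status": "ok",
            "uptime_seconds": round(time.time() - START),
        }).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.end_headers()
        self.wfile.write(body)

if __name__ == "__main__":
    HTTPServer(("", 8080), HealthHandler).serve_forever()
```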

Now that you have your monitoring in place, you can start working on automation.  Automation can add incredible efficiencies to operations, and there are three main areas on which to focus your energies; a simple example of closing the loop from monitoring to automation is sketched below.
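The proportional rule below is the shape of logic many autoscalers apply (Kubernetes' horizontal pod autoscaler uses a similar formula); the target utilisation and replica bounds here are illustrative assumptions, not recommendations.

```python
# Illustrative proportional autoscaling rule: size the fleet so that average
# utilisation moves toward a target. Bounds and target are assumptions.
def desired_replicas(current, cpu_samples, target=0.6, floor=1, ceiling=10):
    average = sum(cpu_samples) / len(cpu_samples)
    proposed = round(current * average / target)
    return max(floor, min(ceiling, proposed))

# Four replicas averaging ~90% CPU against a 60% target -> scale out to six.
print(desired_replicas(4, [0.88, 0.92, 0.90]))
```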

Designing for efficiency up front can add immense cost savings to your solution in the long run.  Retrofitting metrics, diagnostics, health checks, automated tests and IaC onto an existing code base is a near-impossible task, and the costs will undoubtedly outweigh the benefits.  Build these in up front and reap the rewards.  Keep monitoring your system over time as usage evolves and changes; that way you will always be able to improve the efficiency of operations in the systems you build.

If you missed any earlier parts from our series on the 5 Pillars of Good Solution Architecture, click here to read more.

Has Web3 got its priorities wrong?


Co-founder and CIO of enterprise software development house, Global Kinetic, Sergio directly heads its open banking platform, FutureBank. A skilled software engineer, innovative product developer, and keen business strategist, he has participated in several notable fintech milestones, including building the southern hemisphere’s first digital-only bank all the way back in 2002.

Surveying the sorry state of consumer privacy in the Guardian a couple of years ago, Alan Rusbridger hypothesized a privacy “techlash”. He nodded to a Washington Post tech journalist’s description of us “gleefully carrying surveillance machines in our pockets”, but he wasn’t calling on us to throw our phones into the Thames just yet. He felt encouraged by developments like edge computing, encryption, and blockchain:

“One estimate is that there may be 200 or 300 startups, SMEs and entrepreneurs rethinking the ownership and value of data. Finland’s MyData project is just one high-profile attempt to let individuals regain control of their own data. Other players are exploring how blockchain can strengthen privacy as a basic consumer right. The jury is out – and doubtless will be for a while yet.”

Yes and no. It’s two years later – we’ve seen an explosion in use of Signal, DuckDuckGo, DeFi, and NFTs – but the jury’s still hotly debating that exact question: the role of blockchain in protecting PII.

Enter witness for the prosecution, Moxie Marlinspike.

For those who don’t know, Moxie Marlinspike is a highly respected cryptographer and digital security specialist, a former head of security at Twitter, and the founder of Signal, the privacy-optimized answer to WhatsApp. In January, Marlinspike wrote a blog post on his impressions of Web3 in its current state and his thoughts about where it is headed.

Given his high profile, technical expertise, and articulate, deliberative style of communication, the post was always going to draw readers from the techie scene. His negative assessment, relying in part on his own eyebrow-raising, real-world experiences, meant it got a lot more attention than that. It seems anyone and everyone has said something about the piece – now, me included.

Read Sergio’s earlier blog post on Web3 here.

Web3 and the problem with servers

Web3’s idealists hope that by jumping the shiny tracks laid by the Big Tech companies, we will snatch back our privacy and reestablish personal autonomy and control within decentralized networks of computers owned by, well, just about anyone. But, in his post, Marlinspike points to a flaw in the plan:

“When people talk about blockchains, they talk about distributed trust, leaderless consensus, and all the mechanics of how that works, but often gloss over the reality that clients ultimately can’t participate in those mechanics. All the network diagrams are of servers, the trust model is between servers, everything is about servers. Blockchains are designed to be a network of peers, but not designed such that it’s really possible for your mobile device or your browser to be one of those peers.”

Servers are everywhere but in consumers’ hands. Since the average Joe or Jane only has clients (browsers and mobile devices) at their fingertips, their access to the system must be mediated by third-party–owned services provided through servers called nodes. “They’re like the gateway to the blockchain realm,” says QuikNode, a provider.

Gateway or gatekeeper? Jack Dorsey and the Bitcoiners believe that already powerful crypto ventures have made accommodations for the sake of speed and functionality, which has weakened security and consolidated power in only a few hands – by making setting up and running independent nodes difficult, for instance. The benefits of the blockchain are being wasted in attempts to kickstart new network effects and maximize profits for VCs and early adopters, they say.

Marlinspike may or may not agree – he’s playing philosopher king or elder statesman to Dorsey’s freedom fighter here. It’s just that he doesn’t see control of nodes as the problem per se. He’s adamant that no-one – not even “nerds” – wants to run their own servers and it’s by ignoring that fact that we risk repeating history: “To make these technologies usable, the space is consolidating around… platforms. Again. People who will run servers for you, and iterate on the new functionality that emerges. Infura, OpenSea, Coinbase, Etherscan.”

He makes the case for a re-do:

“We should accept the premise that people will not run their own servers by designing systems that can distribute trust without having to distribute infrastructure. This means architecture that anticipates and accepts the inevitable outcome of relatively centralized client/server relationships, but uses cryptography (rather than infrastructure) to distribute trust.”

He believes this will help prevent Web3’s platformication, something that is already well underway. At present, OpenSea has around 95% of the global NFT trading market cornered, with volumes 12 times its closest rival. Ethereum had a similar stranglehold on decentralized finance at the start of 2021 but has lost share as it struggles to scale. Infura and Alchemy control almost all of the market for node services. Coinbase has over half of bitcoin trading wrapped up. It’s no surprise that Coinbase didn’t make a splash at Bitcoin 2022 this year, the biggest crypto event in the world. There’s no need.


Defenders of the evolving crypto ecosystem say that there are more and better alternatives to these providers popping up all the time, but that’s missing the point. As CoinDesk reporter Will Gottsegen wrote in October last year in relation to NFTs: “Decentralized computing doesn’t necessitate a decentralized market structure.”

It’s Wild West stuff, this

Published a month ago and with 5.5 million views and counting, Dan Olson’s YouTube demolition job “Line Goes Up – The Problem With NFTs” might put him up there with Marlinspike in the rankings of influential crypto cynics. It’s “viral”, if something over two and a quarter hours long can be called that. Discussing the video, Casey Newton at Platformer wrote:

“[I]t’s undeniable that today web3 is a mess — and not just in a ‘we haven’t finished building it’ sort of way. Web3 is a mess of a kind that it could take five or more years to fix, and that assumes the work gets started soon. And the thing is … I’m just not sure people are working on these things.”

Like, what things? Well, privacy and security. “It’s hard to imagine a bigger hurdle to the mass adoption of blockchain technologies than the absence of basic trust and safety features, and yet to date, we’ve seen very little,” says Newton, suggesting that few crypto insiders really care enough to prioritize solutions.

When Time asked economist, crypto investor, and Twitter influencer Tascha Che to answer Olson’s charge that aspects of blockchain technology encouraged fraud, she replied that blockchain was no more secure than centralized databases: “The point of the system is a revolution in how we distribute value. The point is not inventing a system that is more secure than the centralized system.”


I’m not sure that’s something you want to put in the brochure. Security – and particularly fraud prevention – ought to be hard-baked into the Web3 world where transactions are irrevocable. It needs mechanisms to ensure only legitimate transactions take place. There isn’t anything like this currently (apart from Bitcoin itself, of course).

Remember the businesses running nodes, to which consumer-side clients must connect in order to access the blockchain and use Web3 applications? On a supposedly trustless system, their word is taken as gospel, for no other reason than that Web3 apps almost never authenticate the information they pass to and from the blockchain. Marlinspike blogged:

“These client APIs are not using anything to verify blockchain state or the authenticity of responses. The results aren’t even signed. [...] So much work, energy, and time has gone into creating a trustless distributed consensus mechanism, but virtually all clients that wish to access it do so by simply trusting the outputs from these two companies [Infura and Alchemy] without any further verification.”

These apps aren’t using even the most basic security best practices, and it’s the same for wallets, the actual stores of value, because they’re clients too. Information may have been tampered with; it may not even be coming from where it should. You wouldn’t know.
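To make the missing piece concrete, here is a minimal sketch of the kind of response authentication Marlinspike is describing: the node signs what it returns and the client verifies against a key it has pinned. It uses Ed25519 via the third-party `cryptography` package purely for illustration; no mainstream Web3 client API does this today, which is exactly his point.

```python
# Illustrative: authenticate a node's response instead of trusting it blindly.
# Uses Ed25519 from the third-party `cryptography` package.
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

node_key = Ed25519PrivateKey.generate()    # stands in for the node's signing key
response = b'{"balance": "1.2 ETH"}'       # stands in for a JSON-RPC result
signature = node_key.sign(response)        # the node would attach this signature

pinned_key = node_key.public_key()         # the client pins this out of band
try:
    pinned_key.verify(signature, response)  # raises if the response was altered
    print("response authenticated")
except InvalidSignature:
    print("reject: tampered with, or not from the node we trust")
```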


Finding alignment on large-scale security issues among many stakeholders is a challenge at the best of times; in a decentralized system, it can seem impossible. But Web3 is still small and dominated by relatively few companies. It does seem odd that they haven’t yet taken the opportunity to address a matter so central to its future success. In the minds of consumers, FOMO doesn’t apply to being hacked. An app isn’t going to replace the need for collaboration.

Safety first

Despite a decade of work and the enormous amounts of money being thrown at it, Web3 remains an insecure, if not dangerous, place for the initiated and uninitiated alike. Marlinspike is one of many who have made the point, and it is arguable whether Web3’s vulnerability to recentralization is a bigger threat to adoption than its insecurity.

A look into the Web3 job jar

For financial institutions exploring Web3, it certainly does look like the next version of the Internet – soon to enter its tweens – has a lot of growing up to do.

Most banks and credit unions will approach Web3 as prudently as they always have; I probably don’t need to advise them to make any investments in technology very carefully. Similarly, I don’t have to remind them that they needn’t risk building it themselves. Platforms like FutureBank can provide them with a highly secure, native integration to the freewheeling new world of opportunity and set them up quickly to take advantage of fast-maturing use cases like embedded finance.

Wondering about doing business on Web3?

Contact Global Kinetic for our assessment of the risks and rewards.

The Next Crypto Milestone - Capital Markets


By Okker Botes - Data Architect, Global Kinetic

For those paying attention, we are watching the emergence of a new era for the global financial system.  Just as the power of the Internet slowly emerged in the 1990s and allowed the world to connect and interact in ways unimaginable before its existence, we are now witnessing the emergence of the ‘Internet of Money’, a phenomenon that will provide a network of roads for finance.

Decentralised finance, or DeFi, has significant potential to disrupt the financial system at an institutional investment level.

I feel privileged to be a witness to the institutional adoption of virtual currency.   Blockchain evangelists have talked about this moment for many years, and now we see it happening. Even more exciting is that I can be part of shaping this future.

Almost every day, leaders in crypto adoption, with a deep understanding of the market, try to sell the idea of a new financial system to incumbents and novices entering the crypto market.  It is a complex concept, and the information needs to be broken into small chunks to help large financial institutions digest and understand it.

Decentralised data exchange platform

There is an increasing focus on tokenisation, and we see the emergence of projects like Chainlink, a decentralised network of oracles that provide real-world data to blockchain networks.  A staking mechanism is used to guarantee the accuracy of the data they supply.

The ultimate vision, however, is a data exchange platform that is decentralised, open-source and not owned by any one entity.  The sharing of data can be monetised through non-fungible tokens (NFTs), which also act as the key that can unlock the shared data.  To be clear, an NFT is used to uniquely identify the data through a simple hashing algorithm, so the shared data itself is never made public.  It is perfectly conceivable that this NFT could itself be tradable, both as a right to access the data and as a token of ownership.
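As a rough sketch of that identification step (the specific digest is an assumption; any collision-resistant hash would do), the NFT would carry a fingerprint like the one below, pinning down exactly which dataset is being traded without exposing its contents:

```python
# Illustrative: derive a content fingerprint an NFT could reference, so the
# token uniquely identifies the shared data without making the data public.
import hashlib

def data_fingerprint(payload: bytes) -> str:
    return hashlib.sha256(payload).hexdigest()

print(data_fingerprint(b"example transaction-history export"))
```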


Multiple stakeholders

In a platform like this, I envision multiple stakeholders: Data Custodians, Data Stewards, Identity Holders and the Data-sharing Platform itself.  The guiding principle is that Identity Holders have the sovereign right to allow or deny access to their data while getting paid for its use.  The “all boats must rise” principle applies: Data Custodians, Data Stewards and the operators of the network, as well as the Identity Holders, each get their share of the value in the data that Identity Holders own and custodians can share. Such a platform aims to legitimise the trading of personal information, which could include investments in institutional funds, transaction history and other key financial and personal data. This legitimisation can power a more sophisticated financial system with privacy at its core.

Integration of old and new

There is certainly value in the current financial system’s constructs, but I foresee a time when the blockchains and products currently in development are leveraged by the existing financial rails and ultimately become the backbone of both the old and the new.  A key milestone in this journey will be integrating the traditional systems that drive the capital markets with the crypto world.  Integration will be a necessary step on the maturity ladder if institutions are to adopt crypto and ultimately help build the future financial system, where blockchains become the ultimate custodian of the data.

In a future where the blockchain is the trading platform and the record of custodianship, no third party will be needed to calculate and read the transactions performed on a blockchain for a specific crypto wallet address, or the real-world assets that an NFT represents.  Yes, I am sure there is a market for products for the safekeeping of cryptographic private keys, but in essence the value stored in the network is linked to a wallet and can be read by any system that can call a simple blockchain API.
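As a sketch of how simple that read can be: eth_getBalance is a standard Ethereum JSON-RPC method, and the call below needs nothing beyond Python's standard library. The node URL and the zero address are placeholders for illustration.

```python
# Illustrative: read a wallet's balance straight from a node over JSON-RPC.
# NODE_URL and the address are placeholders; eth_getBalance is standard.
import json
import urllib.request

NODE_URL = "https://example-node.invalid"   # your node or provider endpoint

payload = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "eth_getBalance",
    "params": ["0x0000000000000000000000000000000000000000", "latest"],
}
request = urllib.request.Request(
    NODE_URL,
    data=json.dumps(payload).encode(),
    headers={"Content-Type": "application/json"},
)
with urllib.request.urlopen(request) as response:
    wei = int(json.load(response)["result"], 16)   # hex-encoded wei

print(wei / 10**18, "ETH")
```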

The reality is that investors are not only interested in Bitcoin but in the whole array of crypto products and other asset classes, some of which may not even have been developed yet.  With a data-sharing platform as described here, a new provider would register their available data and authorisation channels, and consumers could scan these and use what they need.

To bring it back to financial institutions and the incumbent business model: such data exchange platforms can create private channels of communication in which multiple data custodians interact with multiple data consumers.

As excited as I am to see how the adoption of crypto in the capital markets unfolds over the coming months and years, my hope is that there is synergy between the traditional and the new and that we can leverage the best of both worlds.

Three keys to digital innovation at scale


By Sergio Barbosa, CIO, Global Kinetic

Rapidly changing consumer needs, especially in the wake of Covid-19, are driving a very real need for digitization at scale in sectors as diverse as banking, education, gaming and healthcare. Gartner Inc. predicts that worldwide IT spend will top US$3.9 trillion in 2021, up 6.2% from 2020, notwithstanding the impact of the pandemic. There’s no mystery in this, of course. To remain competitive, enterprises of all kinds simply have to keep up with the pace of digitization.

But the added scale and complexity required now, especially as around 33% of organizations are already running half of their workload in the cloud, poses many challenges. For a start, as the level of complexity increases exponentially, large teams of software engineers need to be involved in the development process. In the world of software engineering, where the ‘pizza rule’ applies, this is fundamentally counterintuitive. Software developers know from experience that as soon as teams exceed eight in number, communication across the team becomes complex and the development process not only becomes more difficult to manage but slows down.

The challenge, then, is to maintain the level of innovation that characterizes small teams while simultaneously scaling up in order to be able to develop, implement and manage organization-wide software solutions.

Our core team at Global Kinetic helped develop and build the first-of-its-kind digital bank in the southern hemisphere way back in 2002 using XP, long before fintech and agile were even a thing, so we have ground-level knowledge of the challenges involved. In an environment with no room for error, we’ve distilled what we believe are the three components necessary to deliver software innovation at scale.

A culture of learning

In the first instance, it’s important for development to take place within an organization-wide culture of learning; a culture that allows for experimentation. This is essential as developing and deploying software and innovation at scale has to be done throughout the client organization at multiple key points, all of which drive productivity. The process is also iterative and depends on constant feedback between the development team and the client in order to ensure that the solution meets all requirements. In short, one has to be able to deliver stable solutions in an agile way, which requires a flexible mindset.

A further dimension is that, when working at scale, a number of development teams have to be deployed on the project and cooperation between them has to be both organic and seamless. Not only do they need to be able to coordinate planning activities, they need to have integrated channels for analyzing progress, dealing with feedback, implementing changes, managing versions and testing systems for stability and robustness.

Design thinking and an agile process

A design thinking approach is fundamental to this agility. As a company, design thinking and agility are in our DNA, and they feature in everything we do. This is encapsulated in our use of managed teams, which work together in a cooperative matrix.

Our agility doesn’t imply going with the flow, though, because in the software environment, it’s vital to maintain a rigorous approach. System quality and integrity can’t be compromised. We nevertheless need to be both rigorous and flexible as we need to be able to adapt to feedback and changing circumstances throughout the development process. It’s a fine balance that takes knowledge, skill and experience.

By its nature, innovation is a process of experimentation and that means being flexible about analyzing needs, identifying problems and finding solutions to those problems. The rigor comes in because the solution needs to work impeccably.

Predictable delivery

Finally, one has to have a predictable delivery model; one that guarantees both quality and timing. The big challenge, of course, is that this model needs to harness innovation and not constrain it. We nevertheless need to be able to deploy the software into strategic key points within the organization simultaneously and without disruption, and to manage it as it gains traction throughout the business.

In order to do this, we have to have efficient tools and processes in place to manage the complexity of the development process, while nevertheless being flexible enough to accommodate innovation as an input. That’s why our Managed Enterprise Software Engineering solution is an end-to-end process that takes every project from initiation right through to final delivery. Beyond that, we provide 24/7 support for all software artefacts, which is again delivered by managed teams.

The Global Kinetic approach

Since Global Kinetic was founded, we’ve been guided by these three principles, and they’ve worked for us and our clients every time.

We’ve successfully helped dozens of large global financial institutions execute digital product and enterprise software engineering initiatives. Using managed services teams, we’ve been able to reduce the many risks and uncertainties common in large custom software development projects. Our goal is to consistently provide quality software and solutions that meet a wide range of business needs, in alignment with our customers’ development cycles, and to empower them to innovate.

If you’d like to know more about what we can do for your organization - or are interested in joining one of our teams - please don’t hesitate to get in touch using any of the contact channels available on our site.