
A Little Bit About AI (And More, Straight from Engineer.ai’s Mouth!)

Whilst our driving force is to change this in the near future, today the subject of software development remains confusing. Tech terms, engineering jargon, coding concepts — it’s not exactly a language one learns in school. So, unless you’re already engrossed in this crazy industry, understanding exactly what Engineer.ai does from a product perspective can be difficult, especially as “AI” has become a widely used adjective for anything autonomous (a little like “building apps” is the new “building custom software”, even though “apps” as a term has largely been linked to mobile software).

We’ve been growing and scaling fast, but sitting around as a group a couple of weeks back, our CEO and the senior team asked themselves, “Have we been communicating effectively and providing helpful insight to one of our most important stakeholders, our customers, along the way?” So, without further ado, here are the common questions we get asked:

What is Artificial Intelligence (AI)? How does Engineer.ai leverage AI in its platform? 

A simple definition of AI, as quoted by Techopedia, is:

“Artificial intelligence (AI) is an area of computer science that emphasizes the creation of intelligent machines that work and react like humans. Some of the activities computers with artificial intelligence are designed for include speech recognition, learning, planning, and problem solving.”

Engineer.ai is a Human-Assisted AI platform for building tailor-made software. The platform combines the power of humans, experts in their field with boundless creativity, with AI to create a variety of applications for a wide set of customers. The platform creates much of the ‘app’ and then algorithmically sources human talent from our network to bring it to life, with the two working together.

We don’t claim to fully automate app development [the fully autonomous development process (program synthesis) is the holy grail and is many years away]. Rather, we rely on a human-and-AI partnership, wherein the reusable software (in traditional software development, duplicated code makes up approximately 60% of the product) is machine-produced and the other 40%, what makes most projects unique, is human-produced. As early as 2017, our founder and CEO coined the term “Human-Assisted AI” because, at a core level, we believe that human creativity and talent will always be part of the idea-creation and building journey. Realistically, and rather pragmatically, there would be no benefit in Engineer.ai spending big bucks (we are talking tens of millions) on fully autonomous development when we can deliver the customer benefit by automating what is otherwise repetitive, wasteful or inefficient (the classic 80-20 rule, but in our case the 60-40 rule!)

To run software, Engineer.ai built CloudOps, which uses a combination of data engineering and statistical models to make real-time predictions and infrastructure purchases and exchanges (often running into millions of dollars) in a way that benefits our customers. It’s like we have a mini stock market of cloud capacity, and our platform decides what to buy and when (Bloomberg).
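To make that “mini stock market” idea concrete, here is a rough sketch of what a forward-buy decision could look like. The function name, parameters, and numbers are entirely hypothetical; the actual CloudOps models are not public.

```python
# Hypothetical sketch: decide how much extra cloud capacity to forward-buy
# when forecast demand exceeds what is already reserved. All names and
# numbers are invented for illustration.

def capacity_to_buy(forecast_usage, reserved, spot_price, reserved_price,
                    headroom=1.2):
    """Return (units_to_buy, prefer_reserved_rate) for the next window."""
    needed = forecast_usage * headroom          # forecast plus a safety margin
    shortfall = max(0.0, needed - reserved)     # only buy what reservation misses
    # Buy the shortfall at whichever rate is currently cheaper
    prefer_reserved_rate = reserved_price < spot_price
    return shortfall, prefer_reserved_rate

units, prefer_reserved = capacity_to_buy(
    forecast_usage=100.0, reserved=90.0,
    spot_price=0.10, reserved_price=0.07,
)
```

The real system would layer statistical demand forecasts and exchange logic on top of a core decision like this; the sketch only shows the shape of the optimization.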

Needless to say, whilst we can be proud of our technology and what we are continuing to build, the real truth is that because of what we have built, we have already helped entrepreneurs & small businesses to:

  • BUILD: Build your idea instantly with Builder, without any coding knowledge.
  • RUN: Never pay for building the same app again, get warranty for updates.
  • SCALE: Get everything from cloud infrastructure and analytics to paid services.

What on earth is an AI Expert? How many of them work with Engineer.ai?

AI is a new field, and as such there is a limited pool of “AI Experts”. Broadly speaking, you either have Data Scientists, Data Engineers or very strong Computer Scientists with a foundation in pretty intense statistics. The field of expertise is also very connected to the domain; for instance, someone who has done driver routing at a taxi-hailing app is not likely to have a huge advantage over someone with a strong CS/stats background when attacking a problem related to diabetic retinopathy. In short, relevance is a critical factor.

We have a number of roadmap items (still under wraps) that may not map well onto the problems folks coming from other companies have worked on, so it’s often easier just to take exceptional engineering talent and train them in the relevant discipline within the wider Artificial Intelligence gamut.

If engineering talent is hard to find (and we know this, having hired a lot in the last 5 months), the supply of AI experts and deeply statistically inclined computer scientists is even sparser. Naturally, one has to take a pragmatic approach: find people from whatever pools exist, train them, and get them working on interesting production problems.

In recent times, many of our new team members have come with these backgrounds or they’ve done electives throughout their career/degrees around ML/AI (the most famous being Andrew Ng’s course on machine learning!). 

Not so long ago, we commissioned an independent tech audit of the team (in 2017, by TechDNA); they classified our position as below, and since then we believe we have moved significantly closer to the center. The chart shows the size of the engineering pool by quality, measured by proximity to the nebulous “AI God Status”, globally (in 2017).

Engineer.ai Team Experience (2017)


Our team has had the necessary chops and experience in AI to do what has been asked of them to date; more importantly, we continue to hire so that we can keep building the things we need working tomorrow. It is this very pragmatic approach to building that has allowed us to be lean, focus on the right problems at the right time, and deliver customer benefit from the get-go (small fun fact: we bootstrapped to $23m in revenue, $60m in aggregate revenue, before we raised our Series A).

As a business, we also dogfood our own platform, building things on our capacity network (the partner network of Dev Shops who provide the human creators). If we expect you to trust that network, why wouldn’t we? It’s also a great way to showcase how our bigger customers can balance internal and external teams. The capacity network has access to 132 Dev Shops across 12 time zones; they sit behind our brand and our promise. Yes, it is a little bit crazy, but our customers benefit from round-the-clock access to specialists rather than only dealing with generalists, and we have been able to deliver many solutions that require machine learning for certain features (e.g. image categorization) in their projects.

Our teams are actively working on problem sets around autonomy & AI, and these are some of their core skills: Azure ML, deep MS ML/AI practice, Operations Research, Neural Networks, Bayesian Statistical Learning, Natural Language Processing, TensorFlow with Convolutional Neural Networks (CNNs) & Deep Neural Networks (DNNs), scikit-learn, NumPy, OpenCV, Facial Recognition, Conditional Random Fields, etc.

How have you used AI to benefit your customers?

In a start-up, people move in and out constantly, which makes it even more challenging to keep scaling. It hasn’t been a smooth-sailing ride (and it isn’t supposed to be), but it has certainly been gratifying – not only because of our powerful Series A (Financial Times), but because we see how we have allowed our customers to build things they wouldn’t have been able to before. We have always practiced the principles we preach (HEARTT) – heart (empathy), entrepreneurship, accountability, respect, transparency, and trust – and this benefits all our stakeholders. They always know where they stand with us.

We have used the aforementioned technologies and creator network to build software for dreamers and enterprises much faster, and at a fraction of the cost, compared to industry benchmarks. Significant reuse and the benefit of collective learning allowed Engineer.ai to build these solutions quickly.

Our platform, and I know I sound repetitive, is a partnership between exceptional human talent and AI for scale. From 2018 onwards, the company has rolled out more intelligent automation; it has taken us time, sweat, and tears – but we believe in doing it right, or not doing it at all. 

Where is the AI? Who Moved My Cheese?

Here are some of the things that have been achieved because of the early platform that Engineer.ai built: 

More efficient building

  • We’ve had a percentage of customers purchase custom software online without any human intervention, because 100% of pricing and timelines are produced autonomously. We leverage a price-discovery system built on Natural Language Processing with a neural network.

  • We source and allocate 100% of the creators from our network autonomously and use facial recognition of developers to prevent fraud on the network. This means we can now detect whether different people are working on the project versus the ones originally assigned.

    Using facial recognition to verify and authenticate developers on our network


  • All developers onboarded onto our platform are evaluated using a theoretical examination and a practical assignment that leverages automatic code rating. This lets us build a system that decides whether to assign an advanced developer to more complex work or a beginner to basic work.

    BuilderHive Screenshot

      Multiple Choice Questions before the code assessment.
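As a toy illustration of the price-discovery idea above: the production system reportedly runs a neural network over natural-language briefs, but the shape of the output (detected features, plus price and timeline estimates) can be sketched with a simple keyword matcher. The catalog, keywords, and prices below are entirely made up.

```python
# Toy stand-in for price discovery: map a free-text brief to detected
# features, then sum per-feature price and timeline estimates.
# All catalog entries and figures are invented for illustration.

FEATURE_CATALOG = {
    "login":    {"keywords": {"login", "signup", "account"},    "price": 800,  "days": 3},
    "payments": {"keywords": {"payment", "checkout", "stripe"}, "price": 1500, "days": 5},
    "chat":     {"keywords": {"chat", "messaging"},             "price": 1200, "days": 4},
}

def estimate(brief: str):
    """Return (detected features, total price, total days) for a brief."""
    words = set(brief.lower().split())
    detected = [name for name, spec in FEATURE_CATALOG.items()
                if spec["keywords"] & words]          # any keyword overlap
    price = sum(FEATURE_CATALOG[f]["price"] for f in detected)
    days = sum(FEATURE_CATALOG[f]["days"] for f in detected)
    return detected, price, days

features, price, days = estimate("An app with user login and stripe checkout")
```

A neural model replaces the brittle keyword lookup with learned feature detection, but the downstream pricing arithmetic can stay this simple.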

There are many more features in production or in beta, from common code reuse to a RunEngine that allows seamless interfacing without human involvement. 

Scale better with CloudOps:

  • A churn-prediction model for CloudOps not only allows us to manage our revenue (important) but also signals our capacity-purchase platform to forward-buy new cloud capacity so we don’t over-purchase.
  • We had originally built a usage-prediction model based on supervised learning, but later realized it was not as efficient and replaced it with the more robust, classical Holt-Winters method, which many practitioners find more accurate for this kind of seasonal data.
  • In CloudOps, we use a combination of data engineering and statistical models to make real-time infrastructure purchases (in the millions of dollars) and exchanges in an optimal way.
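For readers curious about the Holt-Winters method mentioned above, here is a minimal hand-rolled sketch of the additive variant: it tracks a level, a trend, and a repeating seasonal component, and forecasts by extrapolating all three. In practice one would more likely use a library implementation (e.g. statsmodels’ `ExponentialSmoothing`); this sketch only shows the mechanics.

```python
def holt_winters_additive(series, m, alpha=0.3, beta=0.1, gamma=0.2, horizon=1):
    """Additive Holt-Winters forecast.

    series: observed usage values; m: season length;
    returns a list of `horizon` forecasts past the end of the series.
    """
    # Initialise level, trend and seasonals from the first two seasons
    level = sum(series[:m]) / m
    trend = (sum(series[m:2 * m]) - sum(series[:m])) / (m * m)
    season = [series[i] - level for i in range(m)]

    # Update the three components with each new observation
    for t in range(m, len(series)):
        last_level = level
        level = alpha * (series[t] - season[t % m]) + (1 - alpha) * (level + trend)
        trend = beta * (level - last_level) + (1 - beta) * trend
        season[t % m] = gamma * (series[t] - level) + (1 - gamma) * season[t % m]

    # Forecast: extrapolate level + trend, reusing the last seasonal cycle
    n = len(series)
    return [level + h * trend + season[(n + h - 1) % m]
            for h in range(1, horizon + 1)]

# Example: a perfectly periodic usage pattern is forecast exactly
forecast = holt_winters_additive([10, 20, 30, 20] * 4, m=4, horizon=4)
```

For cloud usage, the season length `m` would typically be a day or a week of samples, so the forecast captures the daily and weekly load cycles that drive capacity purchases.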


What datasets has Engineer.ai collected to drive process improvements and benefit customers?

In all these years of intensive work, Engineer.ai has collected a lot of data, even back in 2017, as the Tech Audit review had mentioned. To give you a sense of the volume:

  • Detailed CloudOps Usage (over 600M records)
  • Builder Modules usage (2055 feature uses over last 130 projects, ~16 features per project)
  • User stories mapped to features (over 22,000 stories)
  • Infrastructure mapped to modules (~60 recipes)
  • Common errors mapped to features (over 18,000 bugs)
  • Code component commits mapped to a feature (over 10,000 commits)

Can the company provide some facts and statistics on how the platform has benefited various stakeholders?

The platform’s use of automation/AI has already allowed the company to deliver significant benefits to the customer. Take an average Dev Shop versus Engineer.ai in terms of time: our AI, Natasha (as we affectionately named it), already saves the customer time and wasted cost. This should also give you a sense of how we are tackling the problem from the top of the funnel – making it easier for those with ideas to feel compelled to start them. Here is an indicative comparison:

Table Comparing Engineer.ai vs a traditional Dev Shop

Anecdotal Comparison between Engineer.ai and a Dev Shop


Who are your investors and how are they involved in your business?

As you will have seen in the announcements of our Series A round in October 2018, we raised $29.5 million from leading Venture Capitalists (VCs) such as Jungle Ventures, Lakestar, and Deepcore, an AI fund fully owned by SoftBank Group in Japan. We work very closely with the VCs, as they have representation on our board, and we get very timely guidance from our board in areas such as technology, sales and marketing, financial matters, and human capital. We are thrilled to have the right partners working closely with us to continue scaling at the pace we are today.

Can you tell us about your presence globally?

We are currently present through offices in New Delhi (NCR, India), London (United Kingdom), and Los Angeles (United States). And we’re not stopping there: our expansion plans for the year include opening operations in Tokyo, Singapore, and Dubai.
