
Our Articles

MVPs
Why can AI become a good choice for venture capitalists?
September 10, 2024
11 min read

In the Innovantage podcast, Sigli's CBDO Max Golikov talks to tech experts and entrepreneurs about their vision of how artificial intelligence is transforming the world. The 4th episode covers much more than that: its guest, Leesa Soulodre, explained not only the role of technology in modern society but also the role of society in tech progress.

In the Innovantage podcast, Sigli's CBDO Max Golikov talks to tech experts and entrepreneurs about their vision of how artificial intelligence is transforming the world. The 4th episode covers much more than that: its guest, Leesa Soulodre, explained not only the role of technology in modern society but also the role of society in tech progress.

Check out the full Innovantage episode with Leesa Soulodre here: https://www.youtube.com/watch?v=D5oANROV8X4&t=1373s

Leesa is the founder of R3i Group and the Managing General Partner at R3i Capital, a deep-tech cross-border venture capital firm that focuses on AI and sustainable development.

Deep-tech startups: Key pitfalls on their way

Leesa's firm helps projects connect to capital, customers, and non-dilutive financing that lets startup founders keep full ownership of their companies.

Today, founders who work in deep tech (in other words, who build projects based on high-tech innovation or significant scientific advances) traditionally face three main challenges.

A commercial challenge, or the commercial Valley of Death. That's the period when a startup has already begun operations but hasn't generated revenue yet. Founders need to get through this period to prove that their product does what it says on the tin and has value.

A technical challenge. It's important to demonstrate that a product performs consistently, in the same way every time, so that it is safe to use.

An ethical challenge. Much of what we use today has no kill switch, so a product needs to be inherently safe in the way it is provided.

How to make sure that AI developments are safe

While talking about the safety of AI products, Leesa recalled some well-known precedents. The proliferation of Airbnb made everybody's home available to guests, which created the need for trust and safety teams. The growing popularity of ride-hailing services, which turned everybody's car into a potential taxi, highlighted the same demand.

What do we have to deal with in the case of AI? The situation may look rather alarming. Deep neural network compression technology, for instance, could be used to kill more people faster and with less energy than other technologies.

And if you consider that almost any AI product could be misused in some way, it becomes obvious that we need to maintain this notion of trust and safety teams. This is vital to make sure that our technologies are not turned to unintended purposes.

Any AI organization with as much influence as OpenAI that does not invest in trust and safety teams will face significant legal and regulatory hurdles. Moreover, such companies will struggle even more with further growth and innovation if they don't earn implicit trust from their user base.

Regulation in the AI space

When it comes to the regulation of tech companies, there have always been controversies. One of the main reasons is that regulation rarely catches up fast enough with what tech companies are doing.

Discussing this aspect with Max, Leesa mentioned that she sits on the board of the AI Asia-Pacific Institute and communicates with government representatives in the region. Governments want to build safety rails for technologies, and for AI in particular. But there is a significant barrier.

Let's take Singapore as an example.
The absolute majority of registered generative AI companies there are just starting their business journeys. They are at the seed or pre-series A stage, which means that quite often they do not yet know what they have on the tin or what the value of their product is. It's natural that they are not ready to invest in regulatory oversight, so there is little sense in asking them to do so.

Leesa believes it is more sensible to build guard rails into the fabric of the major technologies that underpin the new products and solutions created by startups.

For example, many GenAI companies build their tools on the back of technologies developed by OpenAI, Microsoft, or Amazon. So it makes more sense to start with these tech giants: they need to comply with regulations first.

Is the use of popular LLMs the key to success?

Talking about mature AI technologies that startups can rely on, Leesa mentioned Hugging Face as an example. It is a versatile platform widely recognized for its open-source repository of large language models (LLMs).

Leesa's VC firm works a lot with startups. Around 2,000 projects claimed that they were using Hugging Face. On closer investigation, it turned out that only 200 were truly using it, and only 16 were fundable in the opinion of R3i's experts.

Today there are a lot of players with similar offers. Leesa noted that both open-source and commercial models, like ChatGPT, can be a good option for new technologies. But it's vital to understand that they serve different purposes. ChatGPT, for example, is well suited to handling high volumes of repetitive, near-identical tasks.

As an investor and technologist, Leesa is interested in finding technologies that work as efficiently and safely as possible and bring the highest value.

She said that she doesn't invest in ChatGPT-like solutions. She looks for applied AI technologies around critical infrastructure and highly regulated industries. The projects that interest her most are those that can bring tangible results: they can power transformation from point A to point B in domains such as smart cities, energy, healthcare, industrial manufacturing, water management, agriculture, mobility, space safety, security, and surveillance.

For instance, she mentioned that her VC firm often invests in technologies for renewable energy. It is already a highly regulated sector, despite being comparatively new.

Deep tech investing: When is it a good idea to support an AI project?

In the discussion with Max, Leesa mentioned that today there are a lot of AI-related projects that may look quite appealing to investors but may turn out to be a mousetrap in reality.

Working in the deep tech industry, Leesa prefers to invest only in projects that have deep scientific research and technological invention behind them, which can be proved by a patent pool or a data moat. And if a project has a data moat, it should be its own, not one that Microsoft or OpenAI possesses.

But what are these things? And why do they matter?

Leesa explained this with real-life examples. When scientists at a university invent something, they need to protect it from being copied or misused. In such cases, they can almost always apply for a patent that will protect the idea or innovation. If somebody else wants to use this innovation, they will need to obtain a license.

However, Leesa warns about one serious challenge related to patents.
When you publish a patent, everyone can learn what it is about. Unfortunately, at the moment, patents, especially in the software development industry, are not protected well enough.

Patents themselves can be viewed as assets. Even if a project fails or the development of the technology is frozen, founders will still have a patent that can later be sold.

As for data, it can also be monetized. If your company carefully collects, identifies, classifies, and tags data, you (or somebody else who gets access to it) can use it to create new products or power existing ones.

Generative AI for patents: Can we trust it?

Talking about the capabilities of generative AI, Leesa stated that it is fantastic for ideation, and especially for brainstorming. Nevertheless, it's vital to understand that such models are prone to hallucinations, meaning they can give wrong or irrelevant answers. This may happen because the training data was incomplete or biased, a vivid demonstration of the "garbage in, garbage out" principle. Hallucinations may also happen because AI models often lack constraints that limit the possible outcomes.

Two conclusions follow from this. First, we should always be careful and attentively check whether the information we receive is true. Second, despite the advancements in GenAI, we still need human creativity, empathy, and ingenuity; that is what AI can't provide at the moment.

Innovation timing: Cinderella effect

It's not a good idea to come to the ball too early, as nobody will have turned up yet. But you also shouldn't come too late, as you will miss all the fun. You should be just in time. That's why, before introducing a new technology, it is necessary to analyze whether the market is ready to receive it and to consider the key barriers to its adoption. The psychosocial aspect is important: people should trust you and your solution.

It's vital to listen to different opinions to detect possible unintended consequences that may go unnoticed by founders. When a team is working on a new product, they have only one perspective. But when you are building something new for a community, you should know what impact your innovation will have on it. It's also worth mentioning that the impact on one community may differ from the impact on another.

It may sound surprising, but in many cases it is very sensible to listen to children as well. Today, there are even tech events for kids, and that is a very good trend. One day they will become active users of technology, which is why their voices, their questions, and their doubts also have value.

Speaking about the technologies she has invested in, Leesa mentioned a couple of examples. She described them as absolutely revolutionary from the perspective of activation and implementation.

Quantum Brilliance. The company works on room-temperature, diamond-powered quantum computing. In other words, thanks to the use of synthetic diamonds, quantum accelerators will be able to work at room temperature. Though the history of this project is just beginning, it promises to bring quantum computing to a wide audience and make it an everyday technology. This approach could revolutionize every facet of a smart city, including security, drug discovery, material science, and data operations.

ViewMind. That's a brain health company. Its technology can look into your eye and capture millions of eye movements in a single examination of about 10 minutes.
Based on this, it can determine with a very high degree of accuracy what level of degeneration you have, or are likely to develop, in your brain. Such an examination can help manage diseases like Alzheimer's, dementia, multiple sclerosis, Parkinson's, or even post-traumatic stress in a soldier. The technology can show which area of the brain is affected and help deliver personalized treatment. These types of technologies can absolutely change our lives for the better: they can move us from what we call treatment to the prevention and prediction of diseases. In healthcare, such an approach is of great value.

The most promising, value-based technologies should do something at least slightly better than it is done today and can greatly change the way we perceive something. For example, massive carbon emission reduction technologies change the way we think about the use of energy and water.

What is the greatest threat to economic growth?

Talking about new technologies and economic prosperity, Leesa said that one of the biggest concerns is piracy, both physical and digital.

Piracy is one of the factors that can disrupt supply chains, steal jobs, and put economic development under threat. For example, when digital versions of books are given away for free, the people who contributed to their creation lose their wages.

Nevertheless, despite the huge negative effect, it is sometimes possible to see positive sides. This way of distribution can play an important role in the digital preservation of media that no longer generates profit but is still valuable in terms of history, art, or culture. Moreover, it can open new opportunities for those who have limited access to legal distribution infrastructure.

How to decide where to invest

Different investors apply their own methodologies to the decision-making process. Leesa shared that R3i Capital also has its own philosophy when it comes to choosing projects. One of the most important things they pay attention to is the team.

To avoid unconscious bias, R3i Capital relies on an AI engine built in cooperation with Hatcher. As a result, every team goes through the same filters, and the outcome of the evaluation is as objective as possible.

Thanks to this approach, absolutely everyone has the same chances. Such a system allows the VC firm to give a voice even to groups that are often ignored, like women and minorities.

Moreover, it's very important to analyze how fundable the company is and what its likely impact is. According to Leesa, 100% transparency is essential here, because it ensures the desired trust between founders and capital. After all, capital markets do not need to be brutal. They should be fair instead. This helps achieve a win-win interaction.

Why does sustainability matter?

Leesa explained that with its investments, R3i supports tech companies with a tangible ESG product impact. In other words, they focus on products that prioritize environmental issues, social issues, and corporate governance.

However, investors sometimes say that they do not care about the environment and sustainability; they care about money.

But how can a healthcare product not improve patients' lives if it is a good healthcare product? The same is true of energy, cybersecurity, mobility, and other industries.

Becoming more sustainable, environmentally friendly, and socially valuable doesn't mean earning less money. In fact, it can often mean more money.
This can be explained by people's willingness to pay for things that enhance their quality of life.

If an offered product harms people or has massive negative consequences, society won't trust it.

As venture capital firms are interested in long-run outcomes, they try to bet on winning technologies. Sustainable businesses that focus on social and environmental effects are definitely among them.

Winning together, not alone

At the end of the discussion with Max, Leesa shared her thoughts about the role of society in innovation. One of the key recommendations she gives to everyone is to be more empathetic toward each other and toward common problems.

Sometimes, when founders can't get financial support from governments or corporations, they can receive help from other people who care about solving the problems their projects address.

To achieve success, it's very important to take that first step, and we can't do it alone.

At Sigli, we share this vision, and that's one of the reasons we create the Innovantage podcast episodes.

If you are also fascinated by the capabilities of AI and other emerging technologies, as well as their power to change the world, stay with us. New inspiring ideas are coming soon!
Generative AI Development
Has AI become mainstream now and are we ready for that?
August 6, 2024
10 min read

In the second episode of the Innovantage podcast, Max Golikov talked to Vasil, the Chief Delivery Officer at Sigli, a person who was captivated by AI long before the wider audience had even heard of it. This field looked completely different from run-of-the-mill computing, which made it extremely interesting for him. Inspired by films like Terminator and Star Trek, Vasil chose AI as his major.

Today, when the AI revolution seems to be gaining momentum, it's very important for businesses not to miss their chance to join it, or maybe even to lead this transformation. At Sigli, we want to help you gain a competitive advantage by explaining how you can leverage the power of this technology.

Check out the full Innovantage episode with Vasil Simanionak here: https://youtu.be/osnlRp0RMT8?si=qT6OYYcbyiVTI8Oe

In the second episode of the Innovantage podcast, Max Golikov talked to Vasil, the Chief Delivery Officer at Sigli, a person who was captivated by AI long before the wider audience had even heard of it. This field looked completely different from run-of-the-mill computing, which made it extremely interesting for him. Inspired by films like Terminator and Star Trek, Vasil chose AI as his major.

In a dialog with Max, Vasil shared his vision of the past, present, and future of artificial intelligence and named the task that he will never delegate to AI.

In this article, we've gathered the most interesting ideas from this discussion, and we hope you will find them insightful.

AI: When everything began

It would be completely wrong to say that AI appeared together with ChatGPT, or a year or two earlier. In reality, products powered by AI of one kind or another were developed quite a long time ago.

The first expert systems were delivered around 50 years ago, and they already represented an example of a very narrow AI. Of course, their capabilities, as well as their use cases, were rather limited.

For example, such systems could be used by a lawyer in certain specific cases. Lawyers often need to ask their clients standard questions, like their place of birth, date of birth, place of residence, and so on. Based on the answers, an expert system can prepare a document that will then be submitted to the authorities or used for other purposes.

So what are expert systems? They can be defined as early forms of AI that rely on a set of rules provided by human experts to make decisions or solve problems within a specific domain.

Developing these solutions is ordinary coding work, because they are built on conditions of the form "if something, then do something". The main task, and the main challenge, is to define the right rules. This means that the human experts who write these rules need to deeply understand the specifics of all the related processes.
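To make the "if something, then do something" idea more concrete, here is a minimal sketch of how such hand-written rules might look in code. It is purely illustrative: the facts, rules, and conclusions below are hypothetical and are not taken from any real legal expert system.

```python
# A minimal, hypothetical sketch of the "if X, then Y" idea behind expert systems.
# The facts, rules, and conclusions below are illustrative only.

def derive_conclusions(facts: dict) -> list[str]:
    """Apply hand-written rules (provided by a human expert) to the known facts."""
    conclusions = []
    if facts.get("age", 0) < 18:
        conclusions.append("A parent or guardian must co-sign the application.")
    if facts.get("country_of_residence") != facts.get("country_of_birth"):
        conclusions.append("Attach proof of current residence.")
    if facts.get("has_prior_registration"):
        conclusions.append("Use the short renewal form instead of the full application.")
    return conclusions

client = {
    "age": 34,
    "country_of_birth": "Lithuania",
    "country_of_residence": "Netherlands",
    "has_prior_registration": False,
}
for step in derive_conclusions(client):
    print(step)
```

The intelligence here lives entirely in the rules a human expert wrote; the system never learns anything from data, which is exactly the limitation described above.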
Is ChatGPT an example of AI?

The next stage of AI development is what we consider to be AI in the modern understanding.

While expert systems were difficult for the general public to understand and had only narrow, specific uses, with ChatGPT-like models everything is different. They have gained enormous public attention and they are available to everyone. These solutions allow users to type in queries and get clear results.

When people talk about this kind of system, it is usually ChatGPT that gets mentioned, and that is an example of excellent marketing and branding.

The majority of people definitely consider ChatGPT to be AI. But is that true? Vasil highlighted that the correct answer depends on our perspective and our exact understanding of artificial intelligence.

On the one hand, large language models (LLMs) do not have common sense, but they can process data. They are built on neural networks that mimic the human brain.

A neuron has, for example, two inputs and a single output. If the first input is triggered, the output is triggered; if the second input is triggered, the output is not. In a network, huge numbers of such neurons are arranged in layers. Users provide an input and wait for an output. That's how these systems work.

When it comes to deep learning with LLMs, we do not define the underlying model that processes the data. We just define a kind of infrastructure, a neural network with a lot of neurons interconnected across different layers.

We throw data at it and expect a result. But even the creator of such a model has no idea in advance how the LLM will answer.

Due to the huge media influence, these ChatGPT-like solutions are today widely believed to be true AI, despite some limitations in their capabilities.
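Here is a toy version of the neuron Vasil describes, assuming a simple threshold activation with one excitatory and one inhibitory input. The weights are hand-picked purely for illustration; in a real network they would be learned from data rather than set by a person.

```python
# A toy artificial neuron matching the description above: two inputs, one output.
# The first input excites the neuron, the second inhibits it; weights are illustrative.

def neuron(x1: float, x2: float) -> int:
    w1, w2, threshold = 1.0, -1.0, 0.5   # hand-picked weights, not learned
    activation = w1 * x1 + w2 * x2
    return 1 if activation >= threshold else 0

print(neuron(1, 0))  # first input triggered  -> output fires (1)
print(neuron(0, 1))  # second input triggered -> output stays silent (0)
print(neuron(1, 1))  # both triggered -> inhibition wins in this toy setup (0)
```

In a real deep network, millions of such units are stacked in layers and their weights are adjusted during training, which is why even the model's creators cannot predict exactly how it will answer.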
Basics: What is AI?

AI is a huge umbrella for everything related to machines doing things in a way quite similar to how humans do them. Of course, people can calculate, but a calculator is not an AI solution. So we can say that in the context of AI, machines should do something as well as humans can, or maybe even better.

Despite all the aspirations around AI, it is still a tool, not a different species or anything like that.

Different levels of AI

Today, we can define several models (or levels) of AI. They differ from each other not only in their functionality but also in how they deal with data. Let's briefly summarize them.

Expert systems

As described above, expert systems do not actually work with data. They are nice, straightforward tools, but they do not give you the impression that you are dealing with intelligence.

ML models

ML systems work with data, but there are no strict rules. Engineers and analysts define the model of how this data should be gathered and processed, so we have control over how the solution works with our data. We feed the data into the model and check how to use it.

A good example here is an ML-powered app for the real estate market. You input different parameters, like the size of an apartment and its location, and the app estimates the price based on those parameters.

Large language models

Text models are the simplest ones of this kind. They operate on text input and can convert it into new text. Their work can be compared to that of programmers who need to convert requirements into code.

When the output offered by the model is not good enough, a user can provide feedback. In this way, the model can be trained to produce better outputs.

Moreover, there is a lot of debate about the quality of the data used for training and its origin, such as whether it was obtained and used legally. There is still no single opinion on that.

Will humanity be killed by AI?

That's one of those questions that may sound controversial and sometimes even a little naive, but it's really interesting how AI experts answer it. Vasil gave a rather worrying reply: he said that everything depends on our behavior. Nevertheless, that's not a reason to look for ways to be as nice to AI as possible in order to survive. It's just a reason to study this aspect a little more deeply.

According to Vasil, there is a possibility that AI will exterminate humanity, and there is also a possibility of seeing a dinosaur outside. But still, it is just a possibility.

Our future, and our chances of staying alive :), will depend on how AI-powered solutions, including LLMs, are designed and how we use them.

If we let a ChatGPT-like solution interact with the internet, it will be able to perform rather complex tasks. For example, it will be able to start a website, buy a domain (if you give it some money), and build something on a no-code or low-code platform.

Even an LLM can interact with the real world, and modern AI can convincingly mimic a human not only in text conversations but also in live streams. If you have ever seen videos with lifelike talking faces generated by Microsoft's VASA, you know how convincing they can be.

So can AI take over the world? Theoretically, yes. But only if a human lets it.

Day-to-day applications of AI for actual businesses

In the conversation with Max, Vasil named several examples of widely adopted business use cases of AI.

Content generation. AI can be applied in numerous situations where it takes some input from the user and creates content based on it. AI can compose a good text for your email even if you only have a couple of bullet points.

Summary creation. AI can be a great helper in ingesting content created by someone else. For example, imagine you have a 20-page PDF file and you need to get a general understanding of its content: how much time will you need? What if the document contains 200, 2,000, or 20,000 pages? AI can process it and offer you a quick summary much faster than any human can. What is even more surprising is that for AI, 20,000 pages and 20 pages are much the same.

Support services. AI doesn't get tired, it doesn't get distracted, and it doesn't have bad days. It has no emotions, and that is an advantage here. That's why you shouldn't hesitate to ask AI as many questions as you have; it won't be annoyed. Vasil admitted that in his everyday work he does the same in order to get as much relevant information as possible. When tested against humans, AI turned out to be more polite and tolerant. That's why AI-powered apps can be a good choice for first-line support services that deal with general issues and common queries before handing over to specialized help.

AI is always willing to help and can reduce the time it takes to answer a client. However, it's important not to overlook the financial factor. If you want client support that feels almost human and functions practically without human participation, it may turn out to be more expensive than hiring human specialists.

How much does it cost to implement AI?

The cost of such projects can vary greatly depending on many factors and parameters. For example, the basic infrastructure for models like ChatGPT is a huge number of graphics processing units, or GPUs. This specialized hardware is essential for processing complex computations, as well as for training and running AI models.

That's why it will be necessary to calculate the cost of GPU rental services provided by Nvidia or Microsoft, for example. They have different subscription models that address different needs.

Alternatively, you can opt for on-premises infrastructure and host all the required software and hardware resources on your own physical premises. This model also comes with additional expenses.

If we turn to the use of AI models themselves, there are also various scenarios.

Vasil noted that when you use a commercial model and do not need to train it, the cost of one query will be a couple of cents. However, when you need to train and fine-tune your own solution, it is a completely different story: the price will be significantly higher and very hard to estimate in advance.
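To put the "couple of cents per query" figure into perspective, here is a rough back-of-the-envelope estimate. All the numbers are hypothetical placeholders: real pricing depends on the provider, the model, and how many tokens each query consumes.

```python
# Back-of-the-envelope estimate of monthly spend on a commercial LLM API.
# Every number below is a hypothetical placeholder, not a real price list.

cost_per_query = 0.02        # assume roughly 2 cents per query
queries_per_day = 1_500      # e.g. a support bot handling 1,500 requests a day
retries_factor = 1.3         # some queries need a follow-up to get a usable answer

monthly_cost = cost_per_query * queries_per_day * retries_factor * 30
print(f"Estimated monthly spend: ${monthly_cost:,.0f}")   # about $1,170
```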
It's also crucial to bear in mind that with LLMs you can't expect a 100% correct result for every query, so getting the desired outcome may take several interactions.

In any case, the usual price-quality trade-off applies here: the bigger your investment, the better the result you can expect. However, you should accept that it won't be a human-level result. Given this, businesses should find a balance between the amount they are ready to pay and the quality they will accept.

Future of AI: Will it replace human experts?

Talking about the future, both Max and Vasil agreed that technologies are changing too quickly; it's very hard to make any predictions for more than five years ahead.

However, according to Vasil, in the near future ChatGPT and similar solutions can become great personal assistants. The use of such assistants can go far beyond purely business applications. For example, they will be able to check on users' health, send them reminders, and handle a lot of other tasks that will make people's lives better.

Another interesting and highly promising sphere of AI use is communication, which is extremely important in business.

Let's admit that even when we speak the same language, we all understand some things differently. AI-powered personal assistants can make sure that our thoughts are perceived by others the way we intend.

ChatGPT-like systems will be able to expand our ideas into fuller explanations that are more comprehensible to others. They will serve as bridges between people, because they can translate not just word by word; they can translate what is really being said.

That is the positive side of their implementation. Nevertheless, there is a negative one as well: some translators may lose their jobs.

When is a human better than AI?

One of the key issues with AI highlighted by Vasil is that you can't always check whether ChatGPT is offering you something true or not. That's why, according to him, it's definitely not the best idea to rely on AI to explain things to children. Here, a human is the undisputed leader (especially when it comes to your own child).

Of course, there are solutions like Google's Gemini, where answers are googleable and you can see the source of the information. Nevertheless, AI can't fully understand the context in which a child asks this or that question. Moreover, human interaction is something we all need.

What skills are vital in the AI era?

During their discussion, Max and Vasil also touched on a very important topic: the skills that are required today.

Earlier, teachers and books were the sources of truth for the young generation. Then the internet joined this list. Now, everything is rather unclear.

What sources can be trusted? Whom can we believe?

That's why it is very important for the new generation to develop the ability to check the source of data and understand whether it is trustworthy. A person can be good at some things but completely wrong about others. Given this, it's crucial to apply critical thinking and see whom and when we can trust.

Talking about the value of AI, Max and Vasil also highlighted the importance of human connection and a personal touch in communication.
These are things we should preserve even in the era of AI and large-scale digital transformation.

If you want to learn more about AI, its current role for businesses, and its future prospects, don't miss the next episodes of the Innovantage podcast hosted by Max Golikov.