Advisor vs AI

Entrepreneurship – Advising in the age of AI

Advisor, AI is coming for your job. Are you ready?

As an investment advisor, along with many other advisory-focused professionals around the world (including lawyers, strategy and IT consultants…), I watch the rise of AI with deep interest and concern. What should I do? How can I react and survive the upcoming wave? Is there a wave in the first place? If so, when will it hit me?

Very legitimate questions. Let's stay cool-headed about it and answer them.

Well, we still have some time

As I wrote previously here, I think AI is still unable to efficiently replace humans on complex problems. The gaming industry knows it pretty well. And there is no lack of AI talent there. Of course, LLMs ("Large Language Models" like ChatGPT) are a huge step up for AI, and I do think that LLMs will soon improve the handling of complex problems, gaming AI included, along with many more important ones.

But this is not the case yet. In fact, AI is still limited in its direct, down-to-earth advisory use cases. That will change, it is changing right now, but no, as a financial advisor I cannot be replaced by AI today.

Many people in the business of "advising people" should nevertheless watch its development very carefully. Lawyers of course, and in fact all law-related professions. Investors too, and all other professions that sell intangible advice.

But let’s be a little more practical.

So, what are the current real-world limits that materially prevent me, as an advisor, from using AI right now, or from being replaced by it?

Well it kind of sucks actually

The purely technical limitations on using LLMs for anything but simple problems come down to two issues:

Hello! I am your new advisor. Please teach me how to advise you!

First: for now, the quality is not there. Plain and simple. LLMs and newer models are very efficient at browsing the web for knowledge, creating images and videos, writing code, and producing long texts that are indeed very well written, but when you get into complex problems, well… they just cannot answer precisely enough. Other "classic" AI algorithms and technologies exist, of course, a lot of them, but they have not yet proven that they can replace advisors either.

In the investment field, robo-advisors have been around for at least ten years, and there is still no real artificial intelligence there. Good services, for sure, but no replacement for flesh-and-blood advisors. Traditional AI never really entered the field. It just does not work.

Sure, it is getting better and better, and I DO work on it. But I absolutely cannot trust an OpenAI GPT right now to analyse a client's situation, let alone propose an optimization.

The second problem can be summed up by this classic data analysis motto:

Garbage in, garbage out

Using AI to think and optimize might work in some cases, but the eternal advising problem is still there: however smart you or the computer may be, you will still get crappy results if your starting data is incomplete, biased or simply incorrect.

And transmitting huge chunks of data to an LLM is very painful. You have to pre-format it and work a lot on the prompt to ensure that the model understands it. You have to repeat and test its understanding and memory. A lot of work. Is it worth it?

Of course, this can be largely automated (with APIs), but I will still need to collect and verify data, and so will other advisors, and that step depends heavily on non-automated processes, emotional biases and political limitations (third parties are NOT giving up their data, for a start).
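
To make "largely automated (with APIs)" concrete, here is a minimal sketch of what sending a pre-formatted client profile to an LLM through an API could look like. It assumes the official openai Python package (v1+) and an OPENAI_API_KEY environment variable; the profile fields and the model name are illustrative placeholders of mine, not a real client or a recommendation.

```python
# Minimal sketch: sending a pre-formatted client profile to an LLM via API.
# Assumes the official `openai` Python package (v1+) and an OPENAI_API_KEY
# environment variable. Profile fields and model name are placeholders.
import json

from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# The painful part stays human: the raw client file must first be reduced
# to a clean, verified structure. Garbage in, garbage out.
client_profile = {
    "age": 52,
    "risk_tolerance": "moderate",
    "portfolio": {"equities": 0.60, "bonds": 0.30, "cash": 0.10},
    "goal": "retire at 65 with a stable income",
}

response = client.chat.completions.create(
    model="gpt-4o",  # placeholder; use whatever model you actually trust
    messages=[
        {
            "role": "system",
            "content": "You are a cautious financial-planning assistant. "
                       "Use only the data provided and flag anything missing.",
        },
        {
            "role": "user",
            "content": "Review this client profile and list open questions:\n"
                       + json.dumps(client_profile, indent=2),
        },
    ],
)
print(response.choices[0].message.content)
```

Even automated this way, the quality of the answer is capped by the quality of client_profile, which only a human can collect and verify.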

A machine cannot YET bear responsibility

That is a big advising problem.

IBM expressed this very clearly a long time ago, well before ChatGPT skyrocketed to fame. In one of its 1970s training manuals, the company stated it as a starting principle:

“A computer can never be held accountable, therefore a computer must never make a management decision”

Apart from the regulatory barriers that will probably bloom around the world because of this (it has already started in the EU, we are so good at this), this is a real limit on the use of AI, and of LLMs in particular.

When we pay for expert advice, we actually pay for three sources of value: knowledge, experience and responsibility. And the responsibility part is very important indeed.

In fact, and this is especially true for lawyers: what clients buy is not only legal knowledge or experience. What they buy is the certainty of that legal advice, the responsibility. "A lawyer told me, so it MUST be true!"

For now, AI cannot give that. It might in the future, though, once the quality of advice improves and there is social recognition that AI advice is as reliable as human advice, or more so.

In financial advice, it could be put this way: AI will really be in the game when the partner who manages the household finances can come home and confidently announce to their husband or wife, "OK, we just lost 30% of our retirement capital, but I just followed the advice of the AI advisor I hired last year, so this is not my fault, right?"

WHO owns the data? WHO owns the intelligence?

As currently used, LLMs are kind of a scam. We all know it.

Smart people (myself not included) give them intelligence and knowledge for free by interacting with them.

We give them our data (sensitive data, even!).

We give them A LOT of professional experience.

My name is OpenAI. Trust me, I am here for all mankind. I just want to make the world a better place!

And what do we own in return? Not much. Unless we are a very big company with deep pockets that can afford an internal LLM team and proprietary infrastructure, we must all rely on, well, OpenAI.

I can train a personal robotic private-wealth advisor (ChatGPT lets you do this, for those who have not given it a try yet), give it all my knowledge, experience and data… but OpenAI can still kick me out tomorrow with a big pat on the back, thanks and goodbye.

So I naturally limit myself, especially with sensitive information, which I have to remove completely. For now, I cannot seriously use LLMs because I own nothing of the work and value I might put into them.

I cannot depend on them in any way. That is not a good business decision.

This might change when LLM providers find a way to protect the individual value contributed by users, which means:

  • All prompts and data must be easily transferable to another LLM (see the sketch after this list).
  • My work must be completely segregated from other users'. This is not only about confidentiality: I do not want to work for others, and I am pretty sure I am not alone. For now, I am not sure that the training I do with my personal GPT is not used to improve the general model (well, I am pretty sure it is, actually).
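
While waiting for providers to guarantee that, one way to limit lock-in today is to keep prompts and templates in plain files that you own and treat any given LLM as a swappable backend. A minimal sketch, with a file layout and helper names that are entirely illustrative, not any provider's standard:

```python
# Minimal sketch of a provider-agnostic prompt library: templates live in
# plain JSON files that you own, so switching LLMs means swapping only the
# code that sends the final text. Layout and names are illustrative.
import json
from pathlib import Path

PROMPT_DIR = Path("prompts")  # version-controlled, stored on your side

# Create an example template (normally this file would already exist).
PROMPT_DIR.mkdir(exist_ok=True)
(PROMPT_DIR / "risk_review.json").write_text(json.dumps({
    "text": "Review this portfolio for a client with {risk} risk "
            "tolerance: {portfolio}. List the three main issues."
}))

def load_prompt(name: str) -> dict:
    """Load a template from local storage, not from a provider's UI."""
    return json.loads((PROMPT_DIR / f"{name}.json").read_text())

def render(template: dict, **variables) -> str:
    """Fill the template's placeholders with case-specific values."""
    return template["text"].format(**variables)

prompt = render(load_prompt("risk_review"),
                risk="moderate",
                portfolio="60% equities, 30% bonds, 10% cash")
# `prompt` is now plain text that can go to any backend:
# OpenAI today, a competitor or a local model tomorrow.
print(prompt)
```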

To be perfectly honest and cynical, I would say that I prefer other advisors to give their time and experience to the model, so that I can use it later for free.

But we should get ready, now.

Yes, because as you read the previous part you noticed that many of those problems can be solved, and will be solved in some way (at least partially). For a start, OpenAI is about to open its "GPT Store", which might become a big way for advisors to monetize their work and protect the value they put into the model.

Soon enough there will be a public GPT, accessible for $19 a month from OpenAI (or someone else), able to provide, with good (not perfect) quality, a very large part of the advice we sell.

So what can we do?

  1. First, we as advisors have to move away from defining ourselves as pure technical references. We do not own some one-of-a-kind knowledge. Everything technical we know is publicly available, which means LLMs have that information too.
  2. Start learning about AI now. It can be a foe, or it can be a friend. If we want to survive it, we need to be able to use it when it is good enough. And this is not easy. There is no "master AI in 10 minutes" course, which means we need to invest time in it. And investing is risky: it might not pay off.
  3. We should in particular master the down-to-earth productivity hacks that AI models offer us right now. They cannot replace us yet, but they can still significantly boost our productivity on very specific tasks (a minimal example follows this list). We must know how to use them, and we should keep learning about them as they improve and expand.
  4. Sell responsibility. This means taking risks and owning them. You have to bear the burden of the stress and uncertainty for your clients. That is something very hard to replace.
  5. Work on your pricing: hourly compensation has no meaning against a computer that will always outpace you. We are selling value, not time, so we need to adapt our pricing so that it correctly reflects the value we deliver.

    I am a robot, but believe me, I will manage your finances just fine with all my new hard-coded emotions.
  6. Finally, focus on intermediation, behavioral coaching and human relations. Soft skills. Deal-making. Coaching. There is a human limit to technology. Look at books: there are still a lot of paperback books around. Some people just like the feel of paper, and not being dependent on a battery. However smart an AI becomes, it will still remain very fancy sand, and many people will find it hard to believe that it feels anything. Well, to be honest, neither do some bad advisors, but that is another problem. Be the one who sincerely feels empathy and friendship, and rejoices with your clients.
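
To illustrate point 3, here is a minimal sketch of one such productivity hack: turning rough meeting notes into a first draft of a client follow-up email. It assumes the openai Python package (v1+) and an OPENAI_API_KEY environment variable; the model name is a placeholder, and the output is only a draft that the advisor still reviews and takes responsibility for.

```python
# Minimal sketch of a down-to-earth productivity hack: drafting a client
# follow-up email from rough meeting notes. Assumes the `openai` package
# (v1+) and an OPENAI_API_KEY environment variable; the model name is a
# placeholder. The advisor reviews, edits and signs the final email.
from openai import OpenAI

client = OpenAI()

notes = """
- reviewed Q3 performance; client worried about bond exposure
- agreed to rebalance 5% from bonds to equities next month
- send the updated risk questionnaire
"""

response = client.chat.completions.create(
    model="gpt-4o",  # placeholder model name
    messages=[
        {
            "role": "system",
            "content": "Draft a short, professional follow-up email to a "
                       "client based on an advisor's meeting notes.",
        },
        {"role": "user", "content": notes},
    ],
)
print(response.choices[0].message.content)  # a draft, not a decision
```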

To conclude, here is a quote I heard recently that sums it up quite nicely:

We are more likely to be replaced by an advisor who uses AI than by AI alone. But we can be replaced.
