The world is waking up to the possibilities created by AI tools, generative AI in particular. Let’s explore how these tools could combine to enhance the experience of meeting attendees and speakers at cutting-edge business events two years from now. The process starts before we arrive at the meeting, with our AI learning that we want to attend an event on a particular topic. The AI scans all forthcoming events and speakers, creates short video summaries of the best speakers, and compiles background information on each destination along with the total trip cost for each event.
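For the technically curious, that triage step might look something like the sketch below: a simple scoring pass over candidate events. Everything here is illustrative; the `Event` fields, the 60/40 weighting, and the scoring heuristic are assumptions about how such an assistant could work, not a description of any existing product.

```python
from dataclasses import dataclass

@dataclass
class Event:
    name: str
    topics: set[str]
    speaker_scores: list[float]  # hypothetical 0-1 quality ratings per confirmed speaker
    travel_cost: float           # flights, hotel, and registration combined

def rank_events(events: list[Event], interests: set[str], budget: float) -> list[Event]:
    """Rank affordable events by topic overlap and average speaker quality."""
    affordable = [e for e in events if e.travel_cost <= budget]

    def score(e: Event) -> float:
        topic_fit = len(e.topics & interests) / max(len(interests), 1)
        speaker_fit = sum(e.speaker_scores) / max(len(e.speaker_scores), 1)
        return 0.6 * topic_fit + 0.4 * speaker_fit  # assumed weighting

    return sorted(affordable, key=score, reverse=True)
```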
Once the event choice is made and the trip is booked, our AI monitors the evolving speaker lineup and delegate list to identify people who might be of interest to us. It then messages those we most want to connect with at the event to arrange suitable meeting times and locations.
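The matching itself could be as simple as comparing declared interest tags. The sketch below uses Jaccard similarity over interest sets purely as an illustration; a real assistant would presumably draw on much richer signals than self-declared tags.

```python
def shortlist_delegates(my_interests: set[str],
                        delegates: dict[str, set[str]],
                        top_n: int = 5) -> list[str]:
    """Return the delegates whose declared interests overlap most with ours."""
    def overlap(name: str) -> float:
        # Jaccard similarity: shared tags divided by all tags either party holds
        shared = my_interests & delegates[name]
        return len(shared) / len(my_interests | delegates[name])

    return sorted(delegates, key=overlap, reverse=True)[:top_n]
```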
Fast-forward to the event itself and the experience ‘in the room.’ Back in early 2024, we could already see what cutting-edge Hollywood studios could do with these AI tools, and what we, as individuals, could achieve with a few hours’ effort. Jumping forward to January 2026, mind-blowing developments have taken place. Let’s explore what the possibilities might look like for a global business conference.
Firstly, it is now commonplace for each of us, including all internal and external speakers, to have our own GPT. These assistants curate every piece of content we’ve ever written and every speech we’ve given, combining it with relevant externally sourced material. This allows a speaker to ‘send’ their digital twin to an event to deliver a nearly identical speech virtually or holographically, but with a potentially richer delegate Q&A experience. From the client’s perspective, there is the added benefit of being able to source a simulated version of the very best speakers at a fraction of the cost of having them there in person.
When I, as the speaker, attend in person, the audience interaction experience is transformed. My instant-access GPT can whisper answers to delegate questions in my ear or display them on an autocue. When I am represented in virtual form, my digital twin requires no whisper support, and all responses appear natural and seamless.
So, both versions of me can now remember everything I have written and said over the years but long since forgotten, and provide complete answers to any question. Going further, I can draw on my personal GPT for case examples relevant to the questioner’s sector and region, whatever the topic. In five seconds or less, my GPT could surface relevant case studies: people tokenising their homes in Africa, organisations implementing effective diversity programmes in the candle-making sector, or families 3D printing their own homes in the middle of a major city.
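Under the hood, this is essentially a retrieval problem: filter a tagged corpus down to the questioner’s sector and region. The toy filter below makes the idea concrete; the `CaseStudy` fields are assumptions, and a production assistant would more likely use semantic embeddings than exact tag matches.

```python
from dataclasses import dataclass

@dataclass
class CaseStudy:
    title: str
    sector: str
    region: str
    summary: str

def find_cases(corpus: list[CaseStudy], sector: str, region: str,
               limit: int = 3) -> list[CaseStudy]:
    """Filter the speaker's curated corpus to the questioner's context,
    preferring exact sector-and-region matches over partial ones."""
    exact = [c for c in corpus if c.sector == sector and c.region == region]
    near = [c for c in corpus if c.sector == sector or c.region == region]
    return (exact + [c for c in near if c not in exact])[:limit]
```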
An added benefit of having such comprehensive content is that my AI assistant can provide near-instant written answers, in each questioner’s native language, to all the questions that people never get the chance to ask during the session. Whether I am there in person or as my virtual digital twin, my personal AI can access and remember the names of everyone at the event to increase the personalisation of responses.
A combination of cameras and sensors focused on the audience would allow my AI to monitor participants’ microfacial expressions, interpret their speech patterns, and assess a range of visually observable biometrics such as eye movement and breathing patterns. My AI could use all this data to tailor its answers to each individual’s level of engagement.
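As a rough illustration of how such signals might be combined, consider the sketch below. The weights, thresholds, and response styles are placeholder assumptions, not validated coefficients from any real affect-recognition system.

```python
def engagement_score(gaze_on_speaker: float, blink_rate: float,
                     breath_rate: float) -> float:
    """Blend normalised (0-1) signals into one engagement estimate.
    Assumes higher blink and breath rates indicate lower engagement."""
    return 0.5 * gaze_on_speaker + 0.25 * (1 - blink_rate) + 0.25 * (1 - breath_rate)

def answer_style(score: float) -> str:
    """Pick a response depth to match the questioner's apparent engagement."""
    if score > 0.7:
        return "deep-dive"       # fully engaged: give the complete technical answer
    if score > 0.4:
        return "standard"
    return "headline-first"      # attention drifting: lead with the punchline
```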
The technology also now allows me to send personalised video responses to everyone who couldn’t ask their questions during the session. It doesn’t matter what language they speak: back in 2024, we already had tools that could translate text into videos of me saying it in 60 different languages simultaneously. Now, addressing a global audience is effortless, regardless of the nationalities present. Each individual sees a personalised version of me on their device screen, instantly translating and delivering the talk in the language they are most comfortable with.
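Conceptually, the delivery loop is straightforward, as the sketch below suggests. The `translate` and `render_avatar` callables are hypothetical stand-ins for whatever translation and avatar-rendering services such a system would actually use.

```python
def deliver_personalised(talk_script: str, delegates: dict[str, str],
                         translate, render_avatar) -> dict[str, bytes]:
    """For each delegate (name -> preferred language), translate the talk
    and render a video of the speaker's avatar delivering it.
    `translate` and `render_avatar` are caller-supplied hypothetical services."""
    videos = {}
    for name, language in delegates.items():
        localised = translate(talk_script, target=language)
        videos[name] = render_avatar(localised, lang=language)
    return videos
```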
Video storage is now so cheap that every session can be captured and tagged, so delegates need never fear missing out on a session. Our personal AIs will be able to review the video, create edited highlights and summaries of the parts we find most interesting and valuable, tag them accordingly, and then incorporate all of this into our personal GPT for future use.
As a delegate, when watching any session, we can tap our device to mark a point we want to replay later, either because it is of particular interest or because we didn’t quite grasp the point being made. As the session unfolds, we can also have our AI create an instant highlights reel based on when our eye movements, brain signals, and other biometric indicators suggest the greatest level of engagement.
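One plausible way to assemble that reel: keep every manually tapped moment, plus any second where the engagement estimate crosses a threshold, then merge overlapping clip windows. The sketch below is a minimal version of that logic, with the threshold and window sizes as arbitrary assumptions.

```python
def highlight_segments(engagement: list[float], taps: set[int],
                       threshold: float = 0.8,
                       window: int = 2) -> list[tuple[int, int]]:
    """Return (start, end) second ranges worth keeping: every manual tap,
    plus any second where engagement exceeds the threshold, padded by
    `window` seconds either side and merged when overlapping."""
    marks = {t for t, score in enumerate(engagement) if score > threshold} | taps
    segments: list[tuple[int, int]] = []
    current = None
    for t in sorted(marks):
        start, end = max(t - window, 0), t + window
        if current and start <= current[1]:
            current = (current[0], end)   # overlaps the previous clip: extend it
        else:
            if current:
                segments.append(current)
            current = (start, end)
    if current:
        segments.append(current)
    return segments
```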
The last two years have also seen significant advances in multi-sensory virtual reality and spatial computing headsets and the associated technologies. In 2026, these allow us to enter experience zones and interact with virtual content provided by the speakers, participants, and exhibitors. Some of the main programme sessions can also be experienced in these zones so that we can engage with additional content. Imagine walking around a building or destination being discussed on stage, or experiencing the new vehicle being launched. We can also now taste an entirely new product range without on-site preparation; two thousand people can be given a tasting experience as easily as twenty.
Finally, on the train journey home, we can review the edited highlights reel our AI has created and add voice notes so the AI can forward relevant segments to the colleagues we want to share them with. With the event wrap-up complete, our AI soothes us into a well-earned rest after an intense learning and connection experience.