
Getting Smaht on AI - 2023 Q4 Event Recap



Boston Smaht wrapped up the year diving into the critical topic of AI and its impact on real estate. The Boston office of JLL opened its doors, hosting our panel of industry thought leaders and our great community. Thank you to JLL (especially Danny Klein) and Clockworks for hosting and providing food and beverages as official sponsors.


Our objective was a frank and open discussion on the AI journey ahead for the industry. Each of the three panelists offered their insights and ‘stirred the pot,’ provoking a conversation that could have lasted hours.


  • Phoebe Holtzman - Global Director of Data Science & Innovation, Research at JLL

  • Justin Segal - President, Boxer Property; Founder, Stemmons Enterprise

  • James Scott - Director, MIT/CRE Real Estate Technology Initiative


The format of the event gave each speaker 15 minutes to share their respective points of view, followed by a Q&A session. Before we get into a recap of what was shared, we had a drop-in from our friends Howard Berger and Jim Young from Realcomm. It was a pleasure having them take a few moments out of their busy schedule at Realcomm’s inaugural BuildingsAI event. It was great to connect two rooms on opposite coasts, with RE leaders in each talking about the important topic of AI. I had a little FOMO from the lineup of use cases they were about to hear after hanging up with us. Definitely won’t schedule conflicting events next year.


Starting with Phoebe: she is leading some of JLL’s efforts around the potential uses of AI, and she had some great insights into what they have figured out along this journey. One of the important points she made was the need for an organized data set in order to obtain meaningful results. This was the first mention of the night of the very critical theme of data governance, which flowed throughout the evening. There are also key considerations around where you put the data that is being used to train the models or deliver outputs. There are benefits to housing Generative AI tools built on Large Language Models (LLMs) within more private cloud instances, such as your own organization’s cloud or a service provider’s. One of the immediate benefits she spoke about was the ability of their JLLgpt to help get people past the challenge of creating something from a blank page. We have all sat there procrastinating, faced with the challenge of where to begin a document, presentation, or analysis. I would completely agree. I wish someone else wrote this summary so I could edit it and then send it out. I much prefer fixing something to starting from scratch.


Phoebe also hinted at something we are going to have to look into as we adopt these AI tools: the compute power required to deliver the outputs demands significant data center infrastructure. What is the carbon impact of asking ChatGPT to write you a summary of a large document? It might be fun to ask for a cocktail recipe, but is that a sustainable use of technology when we can easily look at a recipe card or do a more traditional search? Definitely something to dig into as we understand this space and the potential tradeoffs more.
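As a concrete illustration of that private-hosting point (my own sketch, not JLL’s actual setup): many organizations stand up an OpenAI-compatible endpoint inside their own cloud tenancy and point a standard client at it, so prompts and data never leave their environment. The endpoint URL, token, and model name below are all hypothetical.

```python
from openai import OpenAI

# Hypothetical OpenAI-compatible endpoint hosted inside the organization's
# own cloud tenancy, so prompts never leave the private environment.
client = OpenAI(
    base_url="https://llm.internal.example.com/v1",
    api_key="internal-service-token",  # placeholder credential
)

# The blank-page use case: ask the privately hosted model for a first draft
# that a human can then edit, rather than starting from scratch.
response = client.chat.completions.create(
    model="private-llm",  # hypothetical model name on the private deployment
    messages=[
        {"role": "user",
         "content": "Draft an outline for a Q4 office-market summary."},
    ],
)
print(response.choices[0].message.content)
```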


Justin took us into a practical understanding of AI, working toward examples of how Boxer leverages it today with the Stemmons platform. After some terminology and architecture, I’ll land on one of the first key takeaways in my head: the personal vs. enterprise application of AI. Think about the personal one first. Basically, an individual has an idea, leverages the AI, transcribes the output to a document, or decides what to do with it. It’s a one-to-one input/outcome, and while it adds efficiency, it is not scalable. Enterprise is where a condition or action triggers an API call to the AI to create another reaction (there’s a minimal sketch of this pattern after this section). Building this gives you a very scalable approach that can greatly impact operations. There were a lot of great recommendations on how to approach AI and achieve your goals. If you were at the event, you know Justin will share them with you; you just have to ask. Some of the insights included…

  • Assembling the right “deck” of AI tools aligned with your goals. Depending on your strategy, you might want some generative, some ML, some edge, and some cloud.

  • A small-step approach: take small risks for some reward, versus a long development strategy before seeing huge returns.

  • Mixing data governance, workforce globalization, and AI to achieve your goals.

  • As well as a cheat sheet on what to consider for each of those.

Justin wrapped up his session with examples of how to leverage these toolsets. One used AI and Boxer’s deal history to create a tool that surfaces the deals with the highest probability of success based on that history; it helps weed out things that could be a distraction so the team can focus only on higher probabilities. Others included renewal/default predictions, content summarization, and one that sounds great, but I’ll let Justin tell you about it.
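To make the personal vs. enterprise distinction concrete, here is a minimal sketch of the enterprise pattern as I understood it: a condition in a system of record fires a webhook, the handler calls an LLM over an API, and the output triggers the next action with no human in the loop. The endpoint, payload fields, and helper names are all hypothetical, not Stemmons’ actual API.

```python
import json
from urllib import request

# Hypothetical private LLM endpoint that accepts text and returns a summary.
LLM_ENDPOINT = "https://llm.example.internal/v1/summarize"

def on_work_order_closed(work_order: dict) -> None:
    """Hypothetical webhook handler: runs whenever a work order closes."""
    # Condition/trigger: only act on work orders that have resolution notes.
    if not work_order.get("resolution_notes"):
        return

    # Action: call the LLM over an API to summarize the technician's notes...
    payload = json.dumps({"text": work_order["resolution_notes"]}).encode()
    req = request.Request(
        LLM_ENDPOINT, data=payload,
        headers={"Content-Type": "application/json"},
    )
    with request.urlopen(req) as resp:
        summary = json.load(resp)["summary"]

    # ...and the output automatically triggers the next reaction, which is
    # what lets the pattern scale across thousands of events with no person
    # transcribing results by hand.
    notify_tenant(work_order["tenant_id"], summary)

def notify_tenant(tenant_id: str, message: str) -> None:
    # Stand-in for whatever downstream system sends the tenant update.
    print(f"Notify {tenant_id}: {message}")
```

The difference from the personal pattern is simply that no one has to copy the model’s output anywhere; the trigger, the AI call, and the reaction are wired together.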


Rounding out the evening was James Scott, who completely failed at his task of scaring us out of pursuing this AI thing. This paragraph might be a little shorter than the others, because I was busy listening versus taking down notes to recap. James helped calm some fears by clarifying the difference between Artificial General Intelligence and Artificial Intelligence. The former is more in line with what we expect from Cyberdyne in the Terminator movies, where the AI is able to train itself on all of the information available and develop capabilities unexpected by man. Since 1967, AGI has been predicted to arrive at any moment, somewhere between the next 2 and 200 years. With AI, we are typically giving models a set of training data with rewards associated with finding various outcomes. James spent some time talking about bias issues, specifically around trying to do things like find reliable renters. If the data suggests that people from a certain zip code have more defaults and lower credit ratings, and that zip code is predominantly home to people of a specific background or skin color, the output could unfairly skew results against them. People creating the data sets and incentives for training these models need to be cognizant of these biases and test for them in the outputs to help ensure that our use of AI is ethical.


Following that, James hit on the challenging topic of what impact AI will have on jobs. There was a reference to roughly 83 million jobs estimated to be eliminated by AI, with it creating only 69 million new jobs. The feeling, however, is that this is not the complete picture, as historically, tech that made people more efficient created job growth overall. That said, this will not happen without some painful retraining and shifts in career paths. Finally, with the shift in potential work as well as the introduction of tech like autonomous vehicles, we talked a bit about what the future of our cities looks like, and how real estate will change.
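On James’s point about testing model outputs for bias: here is a minimal sketch (my own illustration, not something shown at the event) of one basic check, comparing a screening model’s approval rates across zip codes. The data, column names, and threshold are all hypothetical.

```python
import pandas as pd

# Hypothetical scored rental applications: model_score is the model's
# predicted probability of a reliable renter; zip_code is the proxy
# attribute we are worried about.
applications = pd.DataFrame({
    "zip_code":    ["02108", "02108", "02108", "02121", "02121", "02121"],
    "model_score": [0.91, 0.76, 0.83, 0.42, 0.55, 0.38],
})

APPROVE_THRESHOLD = 0.60  # assumed cutoff for approving an applicant
applications["approved"] = applications["model_score"] >= APPROVE_THRESHOLD

# Compare approval rates across zip codes. A large gap between groups is a
# red flag that the model may be using zip code as a proxy for protected
# characteristics and deserves a closer look before deployment.
rates = applications.groupby("zip_code")["approved"].mean()
print(rates)
print("Approval-rate gap:", rates.max() - rates.min())
```

A check like this doesn’t prove a model is fair, but a large gap is exactly the kind of output-level signal James suggested teams watch for.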


It was a very thought-provoking evening, with a lot learned along the way in only an hour. As I said, we could have spent the whole night on this. If you are in the Boston Smaht community, I hope this encourages you to engage in the dialog in our LinkedIn group. If you are reading this from another part of the country or globe, I encourage you to check out Building Intelligence Group for a local chapter to connect with, and/or check in with our friends at Realcomm to get pulled into the conversation. AI isn’t coming…it’s here.



