Industry Day
We bring together practitioners and researchers in the Information Retrieval domain to promote knowledge sharing and innovation across academia and industry. The Industry Day of ECIR 2024 will be held on Thursday, March 28th, 2024 in Glasgow, UK, immediately after the main conference program.
Industry Day Schedule
- 08:30 – 09:00 Breakfast tea / coffee
- 09:00 – 09:10 Opening
- 09:10 – 10:10 Ed Chi (Google)
- The LLM (Large Language Model) Revolution: Implications from Chatbots and Tool-use to Reasoning (Please see talk details below)
- 10:10 – 10:30 Paper session
- Augmenting Knowledge Graph Hierarchies using Neural Transformers
- Sanat Sharma, Tracy King, Jayant Kumar, Mayank Poddar and Kosta Blank (Adobe Inc.)
- 10:30 – 11:00 Coffee
- 11:00 – 11:45 Jeff Dalton (University of Edinburgh and Bloomberg)
- Generative AI in Finance: Automatic, Topic-based Summaries for Earnings Call Transcripts (Please see talk details below)
- 11:45 – 12:25 Paper session
- Semantic Content Search on IKEA.com
- Mateusz Slominski, Ezgi Yıldırım and Martin Tegner (IKEA Retail)
- Incorporating Query Recommendation for Improving In-car Conversational Search
- Md Rashad Al Hasan Rony, Soumya Ranjan Sahoo*, Abbas Goher Khan*, Ken Friedl, Viju Sudhi* and Christian Suess (BMW Group, *Fraunhofer IAIS)
- 12:30 – 13:30 Lunch
- 13:30 – 14:15 Mounia Lalmas (Spotify)
- AI for Search and Recommendations – Examples from Spotify (Please see talk details below)
- 14:15 – 14:55 Paper session
- Let’s Get It Started: Fostering the Discoverability of New Releases on Deezer
- Léa Briand, Théo Bontempelli, Walid Bendada, Mathieu Morlon, François Rigaud, Benjamin Chapus, Thomas Bouabça and Guillaume Salha-Galvan (Deezer Research)
- NCS4CVR: Neuron-Connection Sharing for Multi-Task Learning in Video Conversion Rate Prediction
- Xuanji Xiao, Jimmy Chen, Yuzhen Liu, Xing Yao, Pei Liu and Chaosheng Fan (Tencent Video)
- 15:00 – 15:30 Coffee
- 15:30 – 16:15 Ben Allison (Amazon)
- Making Decisions in Sponsored Advertising (Please see talk details below)
- 16:15 – 16:35 Paper session
- Variance Reduction in Ratio Metrics for Efficient Online Experiments
- Shubham Baweja, Neeti Pokharna, Aleksei Ustimenko and Olivier Jeunen (ShareChat)
- 16:35 Closing
Industry Day Keynote Speakers:
Ben Allison
Applied ML Scientist at Amazon
Making Decisions in Sponsored Advertising
Modern advertising systems are built on a combination of machine learning, distributed systems, statistics, and game theory. Recent work in recommender systems has illuminated the distinction between the prediction paradigm (which underlies much of machine learning) and the decision paradigm, which accommodates feedback loops and partial observability. Advertising is further subject to strategic actors and marketplace-level dynamics, and our disciplines lack many of the formal tools to reason about large marketplaces that are intermediated by machine learning models. In this talk I'll give an introduction to our advertising domain at Amazon, before moving on to discuss recent work on decision making in ads: learning to bid, and learning to conduct auctions. In both cases we combine machine learning, counterfactual reasoning and optimized decision making with strategyproofness and incentive compatibility, highlighting the opportunities for exciting new work at the intersection of multiple disciplines.
Ben is a machine learning scientist who has worked across a range of applied ML problems at Amazon and beyond. He currently runs the ad serving and monetization org for Amazon's performance brand advertising products. He received his PhD in Machine Learning and NLP from the University of Sheffield before doing a post-doc at the University of Edinburgh. Since coming to Amazon, he has worked in catalog, personalization, and, for the last 5 years, ads. He enjoys multi-disciplinary challenges and collaborations that span machine learning, statistics, causal inference and optimization, and has worked extensively on Deep Learning, Reinforcement Learning, NLP and Recommender Systems as well as the core ad tech problems.
Mounia Lalmas
Senior Research Director at Spotify
AI for Search and Recommendations – Examples from Spotify
Mounia Lalmas is a Senior Director of Research at Spotify and Head of Tech Research in Personalisation, where she leads an interdisciplinary team of research scientists working on personalisation. Mounia also holds an honorary professorship at University College London, and an additional appointment as a Distinguished Research Fellow at the University of Amsterdam. Before that, she was a Director of Research at Yahoo, where she led a team of researchers working on advertising quality. She also worked with various teams at Yahoo on topics related to user engagement in the context of news, search, and user-generated content. Prior to this, she held a Microsoft Research/RAEng Research Chair at the School of Computing Science, University of Glasgow. Before that, she was Professor of Information Retrieval at the Department of Computer Science at Queen Mary, University of London. She is regularly a senior programme committee member at conferences such as WSDM, KDD, WWW and SIGIR. She was programme co-chair for SIGIR 2015, WWW 2018, WSDM 2020, and CIKM 2023.
Ed H. Chi
Research Scientist at Google DeepMind
The LLM (Large Language Model) Revolution: Implications from Chatbots and Tool-use to Reasoning
Deep learning has been a shock to our field in many ways, yet many of us were still surprised at the incredible performance of Large Language Models (LLMs). LLMs use new deep learning techniques with massively large data sets to understand, predict, summarize, and generate new content. LLMs like ChatGPT and Bard have seen a dramatic increase in their capabilities: generating text that is nearly indistinguishable from human-written text, translating languages with amazing accuracy, and answering your questions in an informative way. This has led to a number of exciting research directions for chatbots, tool-use, and reasoning:
– Chatbots: LLM chatbots are more engaging and informative than traditional chatbots. First, LLMs can understand the context of a conversation better than ever before, allowing them to provide more relevant and helpful responses. Second, LLMs enable more engaging conversations than traditional chatbots, because they can understand the nuances of human language and respond in a more natural way. For example, LLMs can make jokes, ask questions, and provide feedback. Finally, because LLM chatbots can hold conversations on a wide range of topics, they can eventually learn and adapt to the user's individual preferences.
– Tool-use, Retrieval Augmentation and Multi-modality: LLMs are also being used to create tools that help us with everyday tasks. For example, LLMs can be used to generate code, write emails, and even create presentations. Beyond human-like responses in chatbots, later LLM innovators realised LLMs' ability to incorporate tool-use, including calling search and recommendation engines, which means that they can effectively become human assistants in synthesising summaries from web search and recommendation results. Tool-use integration has also enabled multimodal capabilities, which means that the chatbot can produce text, speech, images, and video.
– Reasoning: LLMs are also being used to develop new AI systems that can reason and solve problems. Using Chain-of-Thought approaches, we have shown LLMs' ability to break a problem down into smaller problems, use logical reasoning to solve each of them, and then combine the solutions to reach the final answer. LLMs can answer common-sense questions by using their knowledge of the world to reason about the problem, and then use their language skills to generate text that is both creative and informative.
In this talk, I will cover recent advances in these 3 major areas, attempting to draw connections between them, and paint a picture of where major advances might still come from. While the LLM revolution is still in its early stages, it has the potential to revolutionise the way we interact with AI, and make a significant impact on our lives.
Ed H. Chi is a Distinguished Scientist at Google DeepMind, leading machine learning research teams working on large language models (LaMDA/Bard), neural recommendations, and reliable machine learning. With 39 patents and ~200 research articles, he is also known for research on user behavior in web and social media. As the Research Platform Lead, he helped launch Bard, a conversational AI experiment, and delivered significant improvements for YouTube, News, Ads, and the Google Play Store at Google, with >660 product improvements since 2013.
Prior to Google, he was Area Manager and Principal Scientist at Xerox Palo Alto Research Center's Augmented Social Cognition Group, researching how social computing systems help groups of people to remember, think and reason. Ed earned his 3 degrees (B.S., M.S., and Ph.D.) in 6.5 years from the University of Minnesota. Inducted as an ACM Fellow and into the CHI Academy, he also received a 20-year Test of Time award for research in information visualization. He has been featured and quoted in the press, including The Economist, Time Magazine, the LA Times, and the Associated Press. An avid golfer, swimmer, photographer and snowboarder in his spare time, he also has a black belt in Taekwondo.
Jeff Dalton
Associate Professor and Turing AI Fellow in the School of Informatics at the University of Edinburgh
Generative AI in Finance: Automatic, Topic-based Summaries for Earnings Call Transcripts
This talk discusses important challenges for the real-world deployment of generative Large Language Models (LLMs) in the financial industry at Bloomberg. In the context of supporting financial research analysts who research companies and industries, we describe efforts to use Generative AI to generate topic-based summaries for earnings call transcripts. We conclude with directions for how this can be extended towards interactive search and recommendation, and the applied Information Retrieval research needed to support it.
Dr. Jeff Dalton is an Associate Professor and Turing AI Fellow in the School of Informatics at the University of Edinburgh, where he leads the Generalized Representation and Information Learning Lab (GRILL) (https://grilllab.ai). He is also a Visiting Professor in the Bloomberg Search and AI Group working on applications of Large Language Models (LLMs) to finance. He completed his Ph.D. at the University of Massachusetts Amherst in the Center for Intelligent Information Retrieval and later worked on information extraction and NLP at Google Research. He was the lead organizer for the TREC Conversational Assistance Track (CAsT) (http://treccast.ai) and currently organizes the Interactive Knowledge Assistance (iKAT) track. He served as the faculty advisor for the University of Glasgow GRILLBot team in the 2022 and 2023 Alexa Prize Taskbot challenges, where the team won first and second prizes. He holds multiple patents in search, information extraction, and question answering.