Accepted Submissions (🎤 = oral)
CALID: Collaborative Accelerate LLM Inference with Draft Model with Filter Decoding
SeMAnD: Self-Supervised Anomaly Detection in Multimodal Geospatial Datasets
Cubist-style image effects with oblique decision trees
Improving the Faithfulness of LLM-based Abstractive Summarization with Span-level Unlikelihood Training
Hierarchical data visualization via PCA trees
Leveraging Spiking Neural Networks for Solar Energy Prediction in Agriculture
IntentRec: Predicting User Session Intent in Netflix
AFEN: Respiratory Disease Classification using Audio Machine Learning
Data Efficiency for Large Recommendation Models
🎤 Integration of Graph Neural Network and Neural-ODEs for Tumor Dynamics Prediction
Can Large Language Models Explain Themselves? A Study of LLM-Generated Self-Explanations
AutoEvalTTS: An Automatic Evaluation Framework for Text and Audio Assessment
Adaptive Softmax Trees for Many-Class Classification
🎤 Learning to Route with Confidence Tokens
Decima: Decoding gene expression in individual cell types and disease states
🎤 Visual Haystacks: Answering Hard Questions About Sets of Images
Tool-Augmented Compositional Reasoning LLMs with Weak Supervision: A Scalable Approach to Reduce Human Efforts in Agent Customization
Right this way: Can VLMs Guide Us to See More to Answer Questions?
Beyond Item Dissimilarities: Diversifying by Intent in Recommender Systems
Improved Microbiome Prediction through Functional Tree Input for Convolutional Neural Networks
Lucy: Think and Reason to Solve Text-to-SQL
Detecting Machine-Revised Text via Style Preference Optimization
Learning by Aligning 2D Skeleton Sequences and Multi-Modality Fusion
End-To-End Recommendation Systems with Hybrid Graph Neural Networks
Evaluating Gender Bias Transfer between Pretrained and Prompt Adapted Language Models
🎤 Lemur: Integrating Large Language Models in Automated Program Verification
BayesCNS: A Unified Bayesian Approach to Address Cold Start and Non-Stationarity in Large-Scale Search Systems
Unsupervised End-to-End Task-Oriented Dialogue with LLMs: The Power of the Noisy Channel
Early Task-Adaptation of Language Models via Importance Sampling during Pretraining
Open-Source Molecular Processing Pipeline for Generating Molecules
Sigmoid Self-Attention
Utilizing Surrogate Modeling and Evolutionary Policy Search to Discover Effective Policies for Land-Use Planning
🎤 Teaching an LLM To Explore Optimally
Generative AI with Logical Reasoning
Enhanced Guardrails for Data Security in LLMs
🎤 Encoded Modern Hopfield Networks - Addressing Practical Considerations for Large Scale Storage
Distribution Agnostic Regression Paradigm for Watch Time Fitting and Prediction
Enhancing MRI Abdominal Protocol Selection with a Machine-Learning Decision-Support System Utilizing Electronic Health Records
Improving Open Vocabulary Tagging using Semantic Label Clustering and Maximum Bipartite Matching
Hybrid LLM Architecture for Advanced In-Vehicle Voice Assistants
Slug Mobile: Test-Bench for RL Testing
Enhancing Temporal Activity Localization through Multimodal Large Language Models
Does the “most sinfully decadent cake ever” taste good? Answering Yes/No Questions from Figurative Contexts
Encoding Matters: Impact of Categorical Variable Encoding on Performance and Bias
Combating Music Streaming Manipulation Fraud With Machine Learning
🎤 Do Music Generation Models Encode Music Theory?
Call for Abstracts
BayLearn 2024 will be an in-person event hosted in Cupertino, CA.
The BayLearn 2024 abstract submission site is now open for submissions:
https://baylearn.org/submissions
The abstract submission deadline has been extended to Aug 5th, 2024 11:59pm PDT.
Please submit abstracts as a 2-page PDF in NeurIPS format. An extra page for acknowledgements and references is allowed.
About BayLearn
The BayLearn Symposium is an annual gathering of machine learning researchers and scientists from the San Francisco Bay Area. While BayLearn promotes community building and technical discussions between local researchers from academic and industrial institutions, it also welcomes visitors. This one-day event combines invited talks, contributed talks, and posters to foster the exchange of ideas.
Meet with fellow Bay Area machine learning researchers and scientists during the symposium, which will be held on Thursday, October 10th, 2024.
Feel free to circulate this invitation to your colleagues and relevant contacts.
Key Dates
- Abstract submission deadline (extended): Aug 5th, 2024 11:59pm PDT
- Acceptance notifications: Sep 9th, 2024
- BayLearn 2024 Symposium: Thursday, October 10th, 2024
We are planning for BayLearn 2024 to be a purely in-person (NOT hybrid) event in Cupertino. Details to be announced.
Submissions
We encourage submission of abstracts. Acceptable material includes work which has already been submitted or published, preliminary results, and controversial findings. We do not intend to publish paper proceedings; only abstracts will be shared through an online repository. Our primary goal is to foster discussion! For examples of previously accepted talks, please watch the paper presentations from previous BayLearn Symposiums: https://baylearn.org/previous
For more information about submissions, please look here:
https://baylearn.org/submissions
Submit your abstracts via CMT:
https://cmt3.research.microsoft.com/BAYLEARN2024
Please use the NeurIPS submission format: https://neurips.cc/Conferences/2023/PaperInformation/StyleFiles