Accepted Submissions (🎤 = oral)
Demon in the machine: learning to extract work and absorb entropy from fluctuating nanosystems
Temporal Embeddings: Scalable Self-Supervised Temporal Representation Learning from Spatiotemporal Data for Multimodal Computer Vision
🎤 DoReMi: Optimizing Data Mixtures Speeds Up Language Model Pretraining
Modeling Electronic Health Records for Predicting MRI Abdominal Protocols
Saturn: Efficient Multi-Large-Model Deep Learning
Sophia: A Scalable Stochastic Second-order Optimizer for Language Model Pre-training
Ranking Binary Functions without Training
🎤 DORSal: Diffusion for Object-centric Representations of Scenes et al.
Probabilistic Modeling for Mixed-Variable Sequences
Fair Augmentation of Decision Trees Through Selective Node Retraining
Teaching Algorithmic Reasoning via In-context Learning
IDMU: Impact Driven Machine Unlearning
GKD: Generalized Knowledge Distillation for Auto-regressive Sequence Models
Applying Policy Gradient Methods to Image-Based Autonomous Vehicles
CoarsenConf: Equivariant Coarsening with Aggregated Attention for Molecular Conformer Generation
Exploring Zero and Few-shot Techniques for Intent Classification in Conversational AI
Automatic Creative Selection with Cross-Modal Matching
Cost-sensitive learning of classification trees, with application to imbalanced datasets
REALM: Robust Entropy Adaptive Loss Minimization for improved single-sample test-time adaptation
Viewpoint Equivariance for Multi-View 3D Object Detection
Deep Metric Learning with Soft Orthogonal Proxies
Bivariate decision trees
NeRFuser: Scalable Scene Representation by NeRF Registration and Blending
🎤 What Makes ImageNet Look Unlike LAION
Provable Robust Watermarking for AI-Generated Text
iSCAN: Identifying Causal Mechanism Shifts among Nonlinear Additive Noise Models
IC3: Image Captioning by Committee Consensus
Generative Autoencoders as Watermark Attackers: Analyses of Vulnerabilities and Threats
Federated Learning of Gboard Language Models with Differential Privacy
regGPT: Integrating autoregressive DNA language models and supervised models to design realistic regulatory DNA
Learning stochastic dynamics and predicting emergent behavior using transformers
Quantum speedups for stochastic optimization
Field Evaluation of a Machine Learning Decision Support Tool for Traffic Flow Management at Dallas Fort Worth International Airport
Towards Zero-Shot Scale-Aware Monocular Depth Estimation
Pros and cons of soft vs hard decision trees
Influence of Variable Encoding on Group Fairness in the Presence of Shortcut Learning
🎤 ViNT: A Foundation Model for Visual Navigation
GPU-Accelerated WFST Beam Search Decoder for CTC-based Speech Recognition
Stylistic Mastery: Unleashing the Potential of Style Embedding Intervention for Formality-Controlled Spoken Language Translations
Starpoint: A simple and scalable database for embeddings
RLAR: Reinforcement Learning on Agent-specific Reasoning for Large Language Model
Evolving HyperTransformer Policy Generators for Meta-Reinforcement Learning
Multimodal Open Domain QA using Retrieval Augmentation
Trinity: A No-Code AI platform for complex spatial datasets
OPERA: Offline Policy Evaluation with Re-weighted Aggregates of Multiple Estimators
Play to Teach: Adversarial Cooperative Knowledge Transfer in Two-Player Games
Modular Adaptive Depth Networks for Generalizing Over the Number of Hops
Call for Abstracts
BayLearn 2023
BayLearn 2023 will be an in-person event hosted in the San Francisco Bay Area.
The BayLearn 2023 abstract submission site is now open for submissions:
https://baylearn.org/submissions
The abstract submission deadline is Thursday, July 13, 2023 11:59pm PDT.
Please submit abstracts as a 2-page PDF in NeurIPS format. An extra page for acknowledgements and references is allowed.
About BayLearn
The BayLearn Symposium is an annual gathering of machine learning researchers and scientists from the San Francisco Bay Area. While BayLearn promotes community building and technical discussions among local researchers from academic and industrial institutions, it also welcomes visitors. This one-day event combines invited talks, contributed talks, and posters to foster the exchange of ideas.
Meet with fellow Bay Area machine learning researchers and scientists at the symposium, which will be held in mid-October; the exact date will be announced.
Feel free to circulate this invitation to your colleagues and relevant contacts.
Key Dates
- Abstract submission deadline (EXTENDED): Thursday, July 13, 2023 11:59pm PDT
- Acceptance notifications: Wednesday, September 20th, 2023
- BayLearn 2023 Symposium: Thursday, October 19th, 2023 - We are planning for BayLearn 2023 to be a purely in-person (NOT hybrid) event at a venue in the East Bay. Details to be announced.
Submissions
We encourage the submission of abstracts. Acceptable material includes work that has already been submitted or published, preliminary results, and controversial findings. We do not intend to publish proceedings; only abstracts will be shared through an online repository. Our primary goal is to foster discussion! For examples of previously accepted talks, please watch the presentations from previous BayLearn Symposia: https://baylearn.org/previous
For more information about submissions, please look here:
https://baylearn.org/submissions
Submit your abstracts via CMT:
https://cmt3.research.microsoft.com/BAYLEARN2023
Mailing List: If this email was forwarded to you and you would like to join the BayLearn mailing list to receive future communications from us directly, please sign up here.
Unsubscribe Note: You are receiving this e-mail because you have previously registered for, or registered interest in, BayLearn. If you no longer wish to receive e-mails from BayLearn, please unsubscribe using this link: Unsubscribe
Best Regards,
The BayLearn Organizers