Accepted Submissions
📜 Reachability Embeddings: Scalable Self-Supervised Representation Learning from Mobility Trajectories for Multimodal Computer Vision
📜 DAGMA: Learning DAGs via M-matrices and a Log-Determinant Acyclicity Characterization
🎤 Radically Lower Data-Labeling Costs for Document Extraction Models with Selective Labeling
📜 Randomized Exploration for Reinforcement Learning with General Value Function Approximation
📜 Multi-Frame Self-Supervised Depth with Transformers
📜 Learning Optical Flow, Depth, and Scene Flow without Real-World Labels
🎤 Self-Supervised Camera Self-Calibration from Video
🎤 Beyond Separability: Analyzing the Linear Transferability of Contrastive Representations to Related Subpopulations
📜 Few-shot Continual Learning using HyperTransformers
📜 iROAD: Learning an Implicit Recursive Octree Auto-Decoder to Efficiently Encode 3D Shapes
📜 Photo-realistic Neural Domain Randomization
📜 Contextual Mondegreen: Voice Query Transcriptions based on Contextual Signals
📜 Image Search with Text Feedback by Additive Attention Compositional Learning
📜 ShAPO: Implicit Representations for Multi-Object Shape, Appearance, and Pose Optimization
📜 RbX: Region-based explanations of prediction models
📜 Semi-Supervised Learning with Decision Trees: Graph Laplacian Tree Alternating Optimization
📜 Asynchronous Distributed Bayesian Optimization at HPC Scale
📜 Rewards Encoding Environment Dynamics (REED) Improves Preference-based Reinforcement Learning
📜 A Parametric Class of Approximate Gradient Updates for Policy Optimization
📜 FedEmbed: Personalized Private Federated Learning
📜 Beyond Tabula Rasa: Reincarnating Reinforcement Learning
📜 MineDojo: Building Open-Ended Embodied Agents with Internet-Scale Knowledge
📜 SAVi++: Towards End-to-End Object-Centric Learning from Real-World Videos
📜 Physics-based Validation of Machine-learning Approaches for the COSI Space Mission
📜 ItemSage: Learning Product Embeddings for Shopping Recommendations at Pinterest
📜 Leveraging Unlabeled Data to Track Memorization
📜 Adaptation of Surgical Activity Recognition Models Across Operating Room
📜 Translation of Taxonomy Entities using Graph Neural Networks
📜 Towards Building Explainable-AI Systems Across LinkedIn: Key Challenges and Resolutions
📜 Results of the NeurIPS'22 Cross-Domain MetaDL Competition
📜 Learning differentiable solvers for systems with hard constraints
📜 Towards Multimodal Multitask Scene Understanding Model for Indoor Mobile Agents
📜 Developing a Machine Learning Mechanism for Selecting MRI Radiology Titles Using Electronic Medical Records
📜 Safe Real-World Reinforcement Learning for Mobile Agent Obstacle Avoidance
🎤 LM-Nav: Robotic Navigation with Large Pre-Trained Models of Language, Vision, and Action
🎤 When can you trust your model's predictions? A Mistrust Scoring Framework for inference
📜 Self-Supervision for Scene Graph Embeddings
📜 One-class recommendation systems with the hinge pairwise distance loss and orthogonal representations
📜 A Simple, Yet Effective Approach to Finding Biases in Code Generation
📜 Data Feedback Loops: Model-driven Amplification of Dataset Biases
📜 On the impact of overfitting in learning to rank using a margin loss: a case study in job recommender systems
📜 Depth Field Networks for Generalizable Multi-view Scene Representation
📜 DANGER: A Framework of Danger-Aware Novel Dataset Generator Extension for Robustness Test of Machine Learning
📜 Posterior Sampling Model-based Policy Optimization
📜 Layerwise Training of Convex Convolutional Neural Networks with the Burer-Monteiro Factorization
📜 TE2Rules: Extracting Rule Lists from Tree Ensembles
📜 Medical Codes Prediction from Clinical Notes: From Human Coders to Machines
📜 Surrogate for Long-Term User Experience in Recommender Systems
🎤 Plex: Towards Reliability using Pretrained Large Model Extensions
📜 Beyond neural scaling laws: beating power law scaling via data pruning
📜 Accelerating Computational Chemistry with Machine Learning
Call for Abstracts
BayLearn 2022
The BayLearn 2022 abstract submission site is now CLOSED for submissions:
BayLearn 2022 CMT
The abstract submission deadline is Thursday, July 14th, 2022, at 11:59 pm PDT. Please submit abstracts as a 2-page PDF in NeurIPS format. An extra page for acknowledgements and references is allowed.
About BayLearn: The BayLearn Symposium is an annual gathering of machine learning researchers and scientists from the San Francisco Bay Area. While BayLearn promotes community building and technical discussions between local researchers from academic and industrial institutions, it also welcomes visitors. This one-day event combines invited talks, contributed talks, and posters to foster the exchange of ideas.
Meet with fellow Bay Area machine learning researchers and scientists during the symposium, which will be held in mid-October (see Key Dates below for the exact date).
Feel free to circulate this invitation to your colleagues and relevant contacts.
Key Dates
- Thursday July 14th, 2022 at 11:59pm PDT - Abstract submission deadline
- Thursday Sept 15th, 2022 - Acceptance notifications
- October 20th, 2022 - BayLearn 2022 Symposium. We are planning for BayLearn 2022 to be an in-person event.
Submissions
We encourage submission of abstracts. Acceptable material includes work that has already been submitted or published, preliminary results, and controversial findings. We do not intend to publish paper proceedings; only abstracts will be shared through an online repository. Our primary goal is to foster discussion! For examples of previously accepted talks, please watch the paper presentations from previous BayLearn Symposia: https://baylearn.org/previous
For more information about submissions, please look here: https://baylearn.org/submissions
Submit your abstracts via CMT: BayLearn 2022 CMT
Mailing List: If this email was forwarded to you and you would like to join the BayLearn mailing list to receive future communications from us directly, please sign up.
Unsubscribe Note: You are receiving this e-mail because you have previously registered for, or registered interest in, BayLearn. If you no longer wish to receive e-mails from BayLearn, please unsubscribe using this link: Unsubscribe
Best Regards,
The BayLearn Organizers