The AICommunityOWL is a private, independent network of AI enthusiasts. It was founded in 2020 by employees of Fraunhofer IOSB-INA, the OWL University of Applied Sciences (TH OWL), the Centrum Industrial IT (CIIT) and Phoenix Contact. Together, they believe in digital progress through the use of machine learning. They want to create sustainable solutions for the challenges of the future: industry, mobility, smart buildings and smart cities – and above all, for people!
The Machine Learning Reading Group (MLRG) of the AICommunityOWL aims to build a better understanding of current trends in machine learning at a technical level. The target audience is researchers and practitioners in the field of machine learning. We read and discuss current papers that have had a high media impact or a prominent placement (at least an oral) at the leading conferences, e.g. NeurIPS, ICML, ICLR, AISTATS, UAI, COLT, KDD, AAAI, CVPR, ACL, or IJCAI. Attendees are expected to have read (or at least skimmed) the papers to be presented, so that they are not thrown off by the notation or problem statement and can take part in informed discussion of the paper. Suggestions for future papers are encouraged, as are volunteer presenters.
We will hold our next online meeting on Tuesday, July 6th, at 16:00 via this link.
Don’t miss the date and save the event to your calendar:
Next Session Title:
Learning General Visual Representations
In the quest for the best generic visual representation (“vision backbone”), I have landed at large-scale pre-training and transfer. This talk will walk through some highlights of that journey: a clear definition of the setting (Visual Task Adaptation Benchmark – VTAB, arxiv.org/abs/1910.04867), an in-depth look at our first breakthrough result in large-scale pre-training (Big Transfer – BiT, arxiv.org/abs/1912.11370), a more recent result applying the transformer architecture to images (Vision Transformer – ViT, arxiv.org/abs/2010.11929), and finally the question of whether all of this is still meaningful (“Are we done with ImageNet?”, arxiv.org/abs/2006.07159).
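For readers who have not yet looked at the ViT paper, the core idea behind applying a transformer to images can be sketched in a few lines: the image is cut into fixed-size patches, each patch is flattened and linearly projected, and the resulting token sequence is handed to a standard transformer. The sketch below uses illustrative sizes of our own choosing, not the paper's actual configuration.

```python
import numpy as np

def patchify(image, patch_size):
    """Split an (H, W, C) image into flattened non-overlapping patches."""
    h, w, c = image.shape
    assert h % patch_size == 0 and w % patch_size == 0
    patches = (
        image.reshape(h // patch_size, patch_size, w // patch_size, patch_size, c)
        .transpose(0, 2, 1, 3, 4)            # group the two patch-grid axes
        .reshape(-1, patch_size * patch_size * c)
    )
    return patches  # shape: (num_patches, patch_size**2 * C)

rng = np.random.default_rng(0)
image = rng.normal(size=(32, 32, 3))          # toy 32x32 RGB image (assumed size)
patches = patchify(image, patch_size=8)       # 16 patches, each of length 192

embed_dim = 64                                # illustrative embedding width
projection = rng.normal(size=(patches.shape[1], embed_dim))
tokens = patches @ projection                 # (16, 64): the transformer's input sequence
```

In the actual model, a learned class token and position embeddings are added to this sequence before it enters the transformer encoder; the sketch stops at the patch-embedding step.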
Lucas Beyer (Google Brain)
For questions or topic suggestions, feel free to contact firstname.lastname@example.org