I'm a bit of an eclectic mess 🙂 I've been a programmer, journalist, editor, TV producer, and a few other things.
I'm currently working on my second novel, which is complete but still in the editing stage. I wrote my first novel over 20 years ago, but then didn't write much until now.
"ReDi: Efficient Learning-Free Diffusion Inference via Trajectory Retrieval. (arXiv:2302.02285v1 [cs.CV])" — A simple, learning-free retrieval-based diffusion sampling framework for fast inference: at an early stage of generation it retrieves a trajectory similar to the partially generated one from a precomputed knowledge base, skips a large portion of the intermediate steps, and continues sampling from a later step of the retrieved trajectory.
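The retrieval step is easy to picture in code. This is a toy sketch with synthetic trajectories, not the paper's method or API — `redi_sample`, its arguments, and the nearest-neighbor distance are all illustrative assumptions:

```python
import numpy as np

def redi_sample(partial, knowledge_base, resume_step):
    """Retrieve the precomputed trajectory whose early steps are closest to
    the partial one, then skip ahead to a later step of it.
    (Hypothetical simplified interface, not the paper's.)"""
    k = partial.shape[0]
    # Compare the partial trajectory against the first k steps of each entry
    dists = [np.linalg.norm(traj[:k] - partial) for traj in knowledge_base]
    best = knowledge_base[int(np.argmin(dists))]
    # Skip the intermediate denoising steps and resume from the match
    return best[resume_step]

# Toy knowledge base: two 10-step, 2-d "trajectories"
steps = np.linspace(0.0, 1.0, 10)[:, None]
kb = [steps * np.array([1.0, 1.0]), steps * np.array([-1.0, -1.0])]

partial = kb[0][:3] + 1e-3  # early, slightly noisy partial trajectory
state = redi_sample(partial, kb, resume_step=8)
```

The point of the trick is that the retrieval plus jump replaces many denoising iterations with a single lookup, which is where the speedup comes from.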
"Design Booster: A Text-Guided Diffusion Model for Image Translation with Spatial Layout Preservation. (arXiv:2302.02284v1 [cs.CV])" — A new approach for flexible image translation by learning a layout-aware image condition together with a text condition, which co-encodes images and text into a new domain during the training phase.
"Divide and Compose with Score Based Generative Models. (arXiv:2302.02272v1 [cs.CV])" — Learning image components in an unsupervised manner in order to compose those components to generate and manipulate images in an informed manner.
"Real-Time Image Demoireing on Mobile Devices. (arXiv:2302.02184v1 [cs.CV])" — A study on accelerating demoireing networks and a dynamic demoireing acceleration method (DDA) capable of real-time deployment on mobile devices.
"PDEBENCH: An Extensive Benchmark for Scientific Machine Learning. (arXiv:2210.07182v4 [cs.LG] UPDATED)" — A benchmark suite of time-dependent simulation tasks based on Partial Differential Equations (PDEs), which comprises both code and data to benchmark the performance of novel machine learning models against both classical numerical simulations and machine learning baselines.
"Faster Attention Is What You Need: A Fast Self-Attention Neural Network Backbone Architecture for the Edge via Double-Condensing Attention Condensers. (arXiv:2208.06980v3 [cs.CV] UPDATED)" — A faster attention condenser design called double-condensing attention condensers that allows for highly condensed feature embeddings.
"IKEA-Manual: Seeing Shape Assembly Step by Step. (arXiv:2302.01881v1 [cs.CV])" — A dataset of 102 IKEA objects paired with assembly manuals, intended to help develop and test shape assembly methods; the manuals provide step-by-step guidance on how to move and connect different parts in a convenient, physically realizable way.
"BackdoorBox: A Python Toolbox for Backdoor Learning. (arXiv:2302.01762v1 [cs.CR])" — An open-sourced Python toolbox that implements representative and advanced backdoor attacks and defenses under a unified and flexible framework to help detect and possibly defend against backdoor attacks against deep neural networks (DNNs).
"Creating a Large Language Model of a Philosopher" — Tries to answer the question: "Can large language models be trained to produce philosophical texts that are difficult to distinguish from texts produced by human philosophers?" by fine-tuning OpenAI's GPT-3 with the works of philosopher Daniel C. Dennett.
"Towards Attention-aware Rendering for Virtual and Augmented Reality" — An attention-aware model of contrast sensitivity based on measuring contrast sensitivity under different attention distributions and discovering that sensitivity in the periphery drops significantly when the user is required to allocate attention to the fovea.
"The Learnable Typewriter: A Generative Approach to Text Line Analysis. (arXiv:2302.01660v1 [cs.CV])" — A generative document-specific approach to character analysis and recognition in text lines which builds on unsupervised multi-object segmentation methods, and in particular, those that reconstruct images based on a limited amount of visual elements, called sprites. This approach can learn a large number of different characters and leverage line-level annotations when available.
"Cluster-CAM: Cluster-Weighted Visual Interpretation of CNNs' Decision in Image Classification. (arXiv:2302.01642v1 [cs.CV])" — An effective and efficient gradient-free Convolutional Neural Network (CNN) interpretation algorithm which significantly reduces the number of forward passes by splitting the feature maps into clusters in an unsupervised manner.
"CTE: A Dataset for Contextualized Table Extraction. (arXiv:2302.01451v1 [cs.CL])" — A task that aims to extract and define the structure of tables using the textual context of the document, supporting document layout analysis and table understanding, together with a dataset comprising 75k fully annotated pages of scientific papers, including more than 35k tables.
"Understanding and contextualising diffusion models. (arXiv:2302.01394v1 [cs.CV])" — An explanation of how common diffusion models work, focusing on the mathematical theory behind them rather than analyzing specific implementations and related methods in detail.
"Bayesian Metric Learning for Uncertainty Quantification in Image Retrieval. (arXiv:2302.01332v1 [cs.LG])" — A Bayesian encoder for metric learning which, rather than relying on neural amortization as done in prior works, learns a distribution over the network weights with the Laplace Approximation.
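The Laplace Approximation itself is simple to illustrate on a toy model: fit a MAP estimate, then approximate the posterior over weights as a Gaussian whose covariance is the inverse Hessian of the regularized loss at the MAP. A minimal ridge-regression sketch (not the paper's encoder; the prior precision `lam` and the data are made up):

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.standard_normal((50, 2))
w_true = np.array([1.5, -0.5])
y = X @ w_true + 0.1 * rng.standard_normal(50)

lam = 1.0  # prior precision (assumed)

# MAP estimate for ridge regression has a closed form
H = X.T @ X + lam * np.eye(2)   # Hessian of the regularized loss at the MAP
w_map = np.linalg.solve(H, X.T @ y)

# Laplace posterior: N(w_map, H^{-1})
cov = np.linalg.inv(H)

# Predictive uncertainty for a query point x*: x*^T H^{-1} x*
x_star = np.array([1.0, 0.0])
pred_var = x_star @ cov @ x_star
```

For a deep network, the same recipe is applied to the network weights (usually with a further factorized or low-rank approximation of the Hessian), which is what lets a standard metric-learning encoder report uncertainty.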
"Dual PatchNorm. (arXiv:2302.01327v1 [cs.CV])" — Experiments with adding two Layer Normalization layers (LayerNorms), before and after the patch embedding layer in Vision Transformers, to see how it affects accuracy.
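The change is small enough to sketch directly: one LayerNorm before and one after the patch-embedding projection. A toy NumPy version with a parameter-free LayerNorm and random projection weights (the real model uses learned scale/shift parameters and trained weights):

```python
import numpy as np

def layer_norm(x, eps=1e-6):
    # Normalize over the last axis, as a parameter-free LayerNorm would
    mu = x.mean(axis=-1, keepdims=True)
    var = x.var(axis=-1, keepdims=True)
    return (x - mu) / np.sqrt(var + eps)

def dual_patchnorm_embed(image, patch=4, dim=8, seed=0):
    # image: (H, W, C); split into non-overlapping patches and flatten them
    H, W, C = image.shape
    p = image.reshape(H // patch, patch, W // patch, patch, C)
    p = p.transpose(0, 2, 1, 3, 4).reshape(-1, patch * patch * C)
    # Random stand-in for the learned patch-projection matrix
    W_proj = np.random.default_rng(seed).standard_normal((patch * patch * C, dim))
    W_proj /= np.sqrt(patch * patch * C)
    x = layer_norm(p)       # LayerNorm BEFORE the patch embedding
    x = x @ W_proj          # the patch-embedding projection itself
    return layer_norm(x)    # LayerNorm AFTER the patch embedding

image = np.random.default_rng(1).standard_normal((8, 8, 3))
tokens = dual_patchnorm_embed(image)  # 4 patch tokens of dimension 8
```

Everything downstream of the patch embedding is a standard Vision Transformer; the paper's experiments are about whether these two extra normalizations help accuracy.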
"Deep reinforcement learning for the olfactory search POMDP: a quantitative benchmark" — Using deep reinforcement learning to search for a source of odor in turbulence, as applicable to sniffer robots.
"Why Combining Text and Visualization Could Improve Bayesian Reasoning: A Cognitive Load Perspective" — An examination of the cognitive load elicited when solving Bayesian problems using icon arrays, text, and a juxtaposition of text and icon arrays.