I'm a bit of an eclectic mess 🙂 I've been a programmer, journalist, editor, TV producer, and a few other things.
I'm currently working on my second novel, which is complete but still in the editing stage. I wrote my first novel over 20 years ago, but then didn't write much until now.
"Continuous Spatiotemporal Transformers. (arXiv:2301.13338v1 [cs.LG])" — A new transformer architecture that is designed for the modeling of continuous systems which guarantees a continuous and smooth output via optimization in Sobolev space.
"ERA-Solver: Error-Robust Adams Solver for Fast Sampling of Diffusion Probabilistic Models. (arXiv:2301.12935v2 [cs.LG] UPDATED)" — An error-robust Adams solver (ERA-Solver), which utilizes the implicit Adams numerical method that consists of a predictor and a corrector.
"PromptMix: Text-to-image diffusion models enhance the performance of lightweight networks. (arXiv:2301.12914v2 [cs.CV] UPDATED)" — A method for artificially boosting the size of existing datasets, that can be used to improve the performance of lightweight networks.
"MOAT: Alternating Mobile Convolution and Attention Brings Strong Vision Models. (arXiv:2210.01820v2 [cs.CV] UPDATED)" — A family of neural networks that build on top of mobile convolution (i.e., inverted residual blocks) and attention which not only enhances the network representation capacity, but also produces better downsampled features.
"DAG: Depth-Aware Guidance with Denoising Diffusion Probabilistic Models. (arXiv:2212.08861v2 [cs.CV] UPDATED)" — A guidance method for diffusion models that uses estimated depth information derived from the rich intermediate representations of diffusion models.
"Training-Free Structured Diffusion Guidance for Compositional Text-to-Image Synthesis. (arXiv:2212.05032v2 [cs.CV] UPDATED)" — Improving the compositional skills of text-to-image models; specifically, obtainining more accurate attribute binding and better image compositions by incorporating linguistic structures with the diffusion guidance process based on the controllable properties of manipulating cross-attention layers in diffusion-based models.
"Refining Generative Process with Discriminator Guidance in Score-based Diffusion Models. (arXiv:2211.17091v2 [cs.CV] UPDATED)" — A generative SDE with score adjustment using an auxiliary discriminator with the goal of improving the original generative process of a pre-trained diffusion model by estimating the gap between the pre-trained score estimation and the true data score.
"Learning on tree architectures outperforms a convolutional feedforward network. (arXiv:2211.11378v3 [cs.CV] UPDATED)" — A 3-layer tree architecture inspired by experimental-based dendritic tree adaptations is developed and applied to the offline and online learning of the CIFAR-10 database to show that this architecture outperforms the achievable success rates of the 5-layer convolutional LeNet.
"Continual Learning by Modeling Intra-Class Variation. (arXiv:2210.05398v2 [cs.LG] UPDATED)" — An examination of memory-based continual learning which identifies that large variation in the representation space is crucial for avoiding catastrophic forgetting.
"Scalable and Equivariant Spherical CNNs by Discrete-Continuous (DISCO) Convolutions. (arXiv:2209.13603v3 [cs.CV] UPDATED)" — A hybrid discrete-continuous (DISCO) group convolution for spherical convolutional neural networks (CNN) that is simultaneously equivariant and computationally scalable to high-resolution.
"Multi-Level Visual Similarity Based Personalized Tourist Attraction Recommendation Using Geo-Tagged Photos. (arXiv:2109.08275v2 [cs.MM] UPDATED)" — A geo-tagged photo based tourist attraction recommendation system which utilizes the visual contents of photos and interaction behavior data to obtain the final embeddings of users and tourist attractions, which are then used to predict the visit probabilities.
"BiAdam: Fast Adaptive Bilevel Optimization Methods. (arXiv:2106.11396v3 [math.OC] UPDATED)" — A novel fast adaptive bilevel framework to solve stochastic bilevel optimization problems that the outer problem is possibly nonconvex and the inner problem is strongly convex.
"Sparse Oblique Decision Trees: A Tool to Understand and Manipulate Neural Net Features. (arXiv:2104.02922v2 [cs.LG] UPDATED)" — An effort to understanding which of the internal features computed by the neural net are responsible for a particular class, by mimicking part of the neural net with an oblique decision tree having sparse weight vectors at the decision nodes.
"Don't Play Favorites: Minority Guidance for Diffusion Models. (arXiv:2301.12334v1 [cs.LG])" — A framework that can make the generation process of the diffusion models focus on the minority samples, which are instances that lie on low-density regions of a data manifold.
"SEGA: Instructing Diffusion using Semantic Dimensions. (arXiv:2301.12247v1 [cs.CV])" — A semantic guidance method for diffusion models to allow making subtle and extensive edits and changes in composition and style, as well as optimize the overall artistic conception.
"Anticipate, Ensemble and Prune: Improving Convolutional Neural Networks via Aggregated Early Exits. (arXiv:2301.12168v1 [cs.LG])" — A new training technique based on weighted ensembles of early exits, which aims at exploiting the information in the structure of networks to maximise their performance.
"ClusterFuG: Clustering Fully connected Graphs by Multicut. (arXiv:2301.12159v1 [cs.CV])" — A simpler and potentially better performing graph clustering formulation based on multicut (a.k.a. weighted correlation clustering) on the complete graph.
"Towards Equitable Representation in Text-to-Image Synthesis Models with the Cross-Cultural Understanding Benchmark (CCUB) Dataset. (arXiv:2301.12073v1 [cs.CV])" — A culturally-aware priming approach for text-to-image synthesis using a small but culturally curated dataset to fight the bias prevalent in giant datasets.
"Cross-Architectural Positive Pairs improve the effectiveness of Self-Supervised Learning. (arXiv:2301.12025v1 [cs.CV])" — A novel self-supervised learning approach that leverages Transformer and CNN simultaneously to overcome the issues with existing self-supervised techniques which have extreme computational requirements and suffer a substantial drop in performance with a reduction in batch size or pretraining epochs.
"Improved knowledge distillation by utilizing backward pass knowledge in neural networks. (arXiv:2301.12006v1 [cs.LG])" — Addressing the issue with Knowledge Distillation (KD) where there is no guarantee that the model would match in areas for which you do not have enough training samples, by generating new auxiliary training samples based on extracting knowledge from the backward pass of the teacher in the areas where the student diverges greatly from the teacher.