I'm a bit of an eclectic mess: I've been a programmer, journalist, editor, TV producer, and a few other things.
I'm currently working on my second novel, which is complete but still in the editing stage. I wrote my first novel over 20 years ago but didn't write much until now.
"Don't Play Favorites: Minority Guidance for Diffusion Models. (arXiv:2301.12334v1 [cs.LG])" โ A framework that can make the generation process of the diffusion models focus on the minority samples, which are instances that lie on low-density regions of a data manifold.
"SEGA: Instructing Diffusion using Semantic Dimensions. (arXiv:2301.12247v1 [cs.CV])" โ A semantic guidance method for diffusion models to allow making subtle and extensive edits and changes in composition and style, as well as optimize the overall artistic conception.
"Anticipate, Ensemble and Prune: Improving Convolutional Neural Networks via Aggregated Early Exits. (arXiv:2301.12168v1 [cs.LG])" โ A new training technique based on weighted ensembles of early exits, which aims at exploiting the information in the structure of networks to maximise their performance.
"ClusterFuG: Clustering Fully connected Graphs by Multicut. (arXiv:2301.12159v1 [cs.CV])" โ A simpler and potentially better performing graph clustering formulation based on multicut (a.k.a. weighted correlation clustering) on the complete graph.
"Towards Equitable Representation in Text-to-Image Synthesis Models with the Cross-Cultural Understanding Benchmark (CCUB) Dataset. (arXiv:2301.12073v1 [cs.CV])" โ A culturally-aware priming approach for text-to-image synthesis using a small but culturally curated dataset to fight the bias prevalent in giant datasets.
"Cross-Architectural Positive Pairs improve the effectiveness of Self-Supervised Learning. (arXiv:2301.12025v1 [cs.CV])" โ A novel self-supervised learning approach that leverages Transformer and CNN simultaneously to overcome the issues with existing self-supervised techniques which have extreme computational requirements and suffer a substantial drop in performance with a reduction in batch size or pretraining epochs.
"Improved knowledge distillation by utilizing backward pass knowledge in neural networks. (arXiv:2301.12006v1 [cs.LG])" โ Addressing the issue with Knowledge Distillation (KD) where there is no guarantee that the model would match in areas for which you do not have enough training samples, by generating new auxiliary training samples based on extracting knowledge from the backward pass of the teacher in the areas where the student diverges greatly from the teacher.
"RGB Arabic Alphabets Sign Language Dataset. (arXiv:2301.11932v1 [cs.CV])" โ An Arabic Alphabet Sign Language (AASL) dataset comprising of 7,856 raw and fully labelled RGB images of the Arabic sign language alphabets which might be the first such publicly available dataset.
"Input Perturbation Reduces Exposure Bias in Diffusion Models. (arXiv:2301.11706v1 [cs.LG])" โ An exploration of the fact that the the long sampling chain in Denoising Diffusion Probabilistic Models (DDPM) leads to an error accumulation phenomenon, which is similar to the exposure bias problem in autoregressive text generation.
"Image Restoration with Mean-Reverting Stochastic Differential Equations. (arXiv:2301.11699v1 [cs.LG])" โ A stochastic differential equation (SDE) approach for general-purpose image restoration which can restore images without relying on any task-specific prior knowledge.
"Accelerating Guided Diffusion Sampling with Splitting Numerical Methods. (arXiv:2301.11558v1 [cs.CV])" โ A solution to speeding up guided diffusion image generation based on operator splitting methods, motivated by the finding that classical high-order numerical methods are unsuitable for the conditional function.
"3DShape2VecSet: A 3D Shape Representation for Neural Fields and Generative Diffusion Models. (arXiv:2301.11445v1 [cs.CV])" โ A novel shape representation for neural fields designed for generative diffusion models, which can encode 3D shapes given as surface models or point clouds, and represents them as neural fields.
"Improving Cross-modal Alignment for Text-Guided Image Inpainting. (arXiv:2301.11362v1 [cs.CV])" โ A model for text-guided image inpainting by improving cross-modal alignment (CMA) using cross-modal alignment distillation and in-sample distribution distillation.
"Rethinking 1x1 Convolutions: Can we train CNNs with Frozen Random Filters?. (arXiv:2301.11360v1 [cs.CV])" โ An exploration into whether Convolutional Neural Networks (CNN) learning the weights of vast numbers of convolutional operators is really necessary.
"Multimodal Event Transformer for Image-guided Story Ending Generation. (arXiv:2301.11357v1 [cs.CV])" โ A multimodal event transformer, an event-based reasoning framework for image-guided story ending generation which constructs visual and semantic event graphs from story plots and ending image, and leverages event-based reasoning to reason and mine implicit information in a single modality.
"Animating Still Images. (arXiv:2209.10497v2 [cs.CV] UPDATED)" โ A method for imparting motion to a still 2D image which uses deep learning to segment part of the image as the subject, uses in-paining to complete the background, and then adds animation to the subject.
"VAuLT: Augmenting the Vision-and-Language Transformer for Sentiment Classification on Social Media. (arXiv:2208.09021v3 [cs.CV] UPDATED)" โ An extension of the popular Vision-and-Language Transformer (ViLT) to improve performance on vision-and-language (VL) tasks that involve more complex text inputs than image captions while having minimal impact on training and inference efficiency.
"SIViDet: Salient Image for Efficient Weaponized Violence Detection. (arXiv:2207.12850v4 [cs.CV] UPDATED)" โ A new dataset that contains videos depicting weaponized violence, non-weaponized violence, and non-violent events; and a proposal for a novel data-centric method that arranges video frames into salient images while minimizing information loss for comfortable inference by SOTA image classifiers.
"BridgeTower: Building Bridges Between Encoders in Vision-Language Representation Learning. (arXiv:2206.08657v3 [cs.CV] UPDATED)" โ A proposal for multiple bridge layers that build a connection between the top layers of uni-modal encoders and each layer of the cross-modal encoder.
"Text-To-4D Dynamic Scene Generation. (arXiv:2301.11280v1 [cs.CV])" โ A method for generating three-dimensional dynamic scenes from text descriptions which uses a 4D dynamic Neural Radiance Field (NeRF), which is optimized for scene appearance, density, and motion consistency by querying a Text-to-Video (T2V) diffusion-based model.