technology and research @lemm.ee - posts by kenna @lemm.ee

Sharp ePoster
Efficient Streaming Language Models with Attention Sinks - #AI
RPi5 - #SBC
Raspberry Pi 5 Benchmarks - #SBC
UNITED STATES OF AMERICA v. GOOGLE LLC, 1:20-cv-03010 - CourtListener.com
Lil'Log - #AI
lilianweng.github.io - Lil'Log

Document my learning notes.

Interesting posts from a safety researcher at OpenAI.

SVG Customization - #AI
intchous.github.io - Text-Guided Vector Graphics Customization

Dev Kit AR Quest 3 - #VR
AudioLDM2: Text-to-Audio Generation with Latent Diffusion Models - Speech Research - #AI

Although audio generation shares commonalities across different types of audio, such as speech, music, and sound effects, designing models for each type requires careful consideration of specific objectives and biases that can significantly differ from those of other types. To bring us closer to a unified perspective of audio generation, this paper proposes a framework that utilizes the same learning method for speech, music, and sound-effect generation. Our framework introduces a general representation of audio, called the "language of audio" (LOA). Any audio can be translated into LOA based on AudioMAE, a self-supervised pre-trained representation-learning model. In the generation process, we translate any modality into LOA using a GPT-2 model, and we perform self-supervised audio generation learning with a latent diffusion model conditioned on LOA. The proposed framework naturally brings advantages such as in-context learning abilities and reusable self-supervised pre-trained AudioMAE and latent diffusion models. Experiments on the major benchmarks of text-to-audio, text-to-music, and text-to-speech demonstrate new state-of-the-art performance or performance competitive with previous approaches.
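
A minimal runnable sketch of the two-stage design the abstract describes; the module names and sizes below (ToyLM, ToyDenoiser, D, LOA_LEN) are hypothetical stand-ins, not the authors' code. A GPT-2-style model maps conditioning tokens to the LOA sequence, and a latent diffusion model conditioned on that LOA iteratively denoises an audio latent.

```python
# Toy sketch of the AudioLDM2 two-stage pipeline (hypothetical stand-ins, not
# the authors' code). Stage 1: a GPT-2-style model translates conditioning
# tokens into the "language of audio" (LOA). Stage 2: a latent diffusion model
# conditioned on the LOA denoises an audio latent.
import torch
import torch.nn as nn

D, LOA_LEN, STEPS = 64, 16, 50  # toy embedding width, LOA length, denoise steps

class ToyLM(nn.Module):
    """Stand-in for the GPT-2 model that maps conditioning to LOA."""
    def __init__(self):
        super().__init__()
        layer = nn.TransformerEncoderLayer(d_model=D, nhead=4, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=2)
        self.to_loa = nn.Linear(D, D)

    def forward(self, cond):                  # cond: (B, T, D) embeddings
        return self.to_loa(self.encoder(cond)[:, :LOA_LEN])  # (B, LOA_LEN, D)

class ToyDenoiser(nn.Module):
    """Stand-in for the latent diffusion model conditioned on LOA."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(2 * D, 128), nn.GELU(),
                                 nn.Linear(128, D))

    def forward(self, z, loa):
        c = loa.mean(dim=1, keepdim=True).expand_as(z)  # broadcast LOA summary
        return self.net(torch.cat([z, c], dim=-1))      # predicted noise

@torch.no_grad()
def generate(cond, lm, denoiser):
    loa = lm(cond)                            # stage 1: conditioning -> LOA
    z = torch.randn(cond.size(0), 32, D)      # stage 2: start from noise
    for _ in range(STEPS):                    # crude denoising loop, no scheduler
        z = z - 0.05 * denoiser(z, loa)
    return z  # audio latent; a decoder (e.g. VAE + vocoder) would yield a waveform

print(generate(torch.randn(2, 24, D), ToyLM(), ToyDenoiser()).shape)  # (2, 32, 64)
```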

The Tragedy of Google Search
ControlNet Spiral - #AI
RealityKit Overview - #XR
developer.apple.com - RealityKit Overview - Augmented Reality - Apple Developer

Use the Reality Composer app and RealityKit to build animations and interactions in iOS and macOS to enrich your 3D content.

Projects – Hand and Machine - #Fabric
Tiny Stories - #AI
Flexible, Ultra-thin OLED
DEFCON 31 - Snoop Unto Them, As They Snoop Unto Us
blog.dataparty.xyz - DEFCON 31 - Snoop unto them, as they snoop unto us

The official videos from DEFCON 31 have been posted! Below you can watch our talk “Snoop unto them as they snoop unto you”. The talk, slides, files

Microsoft v FTC court case leak
PhyMask: Robust Sensing of Brain Activity and Physiological Signals During Sleep with an All-textile Eye Mask - #Sense, #Fabric

Clinical-grade wearable sleep monitoring is a challenging problem since it requires concurrently monitoring brain activity, eye movement, muscle activity, cardio-respiratory features, and gross body movements. This requires multiple sensors to be worn at different locations, as well as uncomfortable adhesives and discrete electronic components placed on the head. As a result, existing wearables compromise either comfort or accuracy in tracking sleep variables. We propose PhyMask, an all-textile sleep-monitoring solution that is practical and comfortable for continuous use and that acquires all signals of interest for sleep tracking using only comfortable textile sensors placed on the head. We show that PhyMask can accurately measure all the signals required for precise sleep-stage tracking and robustly extract advanced sleep markers such as spindles and K-complexes in real-world settings. We validate PhyMask against polysomnography (PSG) and show that it significantly outperforms two commercially available sleep-tracking wearables, Fitbit and the Oura Ring.
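
The spindle and K-complex claim implies a concrete signal-processing step. Below is a standard baseline for sleep-spindle detection, offered only as an illustration (the paper's own extraction method may differ, and the sampling rate, threshold, and duration limits here are assumptions): band-pass the EEG in the sigma band (~11-16 Hz), take the amplitude envelope, and keep supra-threshold runs of roughly 0.5-2 s.

```python
# Baseline sigma-band spindle detector (illustrative; not PhyMask's method).
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

FS = 256  # sampling rate in Hz (assumed)

def contiguous_runs(mask):
    """(start, end) index pairs for runs of True in a boolean array."""
    padded = np.concatenate([[False], mask, [False]])
    edges = np.flatnonzero(np.diff(padded.astype(int)))
    return list(zip(edges[::2], edges[1::2]))

def detect_spindles(eeg, fs=FS, lo=11.0, hi=16.0, thresh_sd=2.0):
    b, a = butter(4, [lo / (fs / 2), hi / (fs / 2)], btype="band")
    sigma = filtfilt(b, a, eeg)                # sigma-band component
    env = np.abs(hilbert(sigma))               # amplitude envelope
    mask = env > env.mean() + thresh_sd * env.std()
    # keep runs whose duration falls in the typical 0.5-2 s spindle range
    return [(s / fs, e / fs) for s, e in contiguous_runs(mask)
            if 0.5 <= (e - s) / fs <= 2.0]

# usage on synthetic data: noise plus a 1 s burst of 13 Hz centred at t = 10.5 s
t = np.arange(30 * FS) / FS
eeg = 0.1 * np.random.randn(t.size) + (np.abs(t - 10.5) < 0.5) * np.sin(2 * np.pi * 13 * t)
print(detect_spindles(eeg))  # expect roughly [(10.0, 11.0)]
```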

DisPad: Flexible On-Body Displacement of Fabric Sensors for Robust Joint-Motion Tracking - #Sense, #AI, #Fabric

The last few decades have witnessed an emerging trend toward wearable soft sensors; however, important signal-processing challenges still limit their practical deployment. They are error-prone when displaced, resulting in significant deviations from their ideal sensor output. In this work, we propose a novel prototype that integrates an elbow pad with a sparse network of soft sensors. Our prototype is fully bio-compatible, stretchable, and wearable. We develop a learning-based method to predict the elbow orientation angle and achieve an average tracking error of 9.82 degrees in single-user, multi-motion experiments. With transfer learning, our method achieves average tracking errors of 10.98 and 11.81 degrees across different motion types and users, respectively. Our core contribution is a solution that realizes robust and stable human joint-motion tracking across different device displacements.
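
A hedged sketch of the kind of pipeline the abstract implies (sensor count, window size, architecture, and training details are assumptions, not the authors' design): fit a small regressor from windows of soft-sensor readings to elbow angle, then adapt to a new user via transfer learning by freezing the backbone and fine-tuning only the output head on a small calibration set.

```python
# Illustrative joint-angle regressor with head-only fine-tuning (assumed design).
import torch
import torch.nn as nn

N_SENSORS, WINDOW = 6, 20   # assumed: 6 fabric sensors, 20-sample windows

class AngleRegressor(nn.Module):
    def __init__(self):
        super().__init__()
        self.backbone = nn.Sequential(
            nn.Flatten(), nn.Linear(N_SENSORS * WINDOW, 128), nn.ReLU(),
            nn.Linear(128, 64), nn.ReLU())
        self.head = nn.Linear(64, 1)          # elbow angle in degrees

    def forward(self, x):                     # x: (B, WINDOW, N_SENSORS)
        return self.head(self.backbone(x))

def train(model, x, y, params, epochs=100, lr=1e-3):
    opt = torch.optim.Adam(params, lr=lr)
    for _ in range(epochs):
        opt.zero_grad()
        loss = nn.functional.mse_loss(model(x), y)
        loss.backward()
        opt.step()
    return loss.item()

model = AngleRegressor()
# source user: plenty of (synthetic) windows and ground-truth angles
xs, ys = torch.randn(512, WINDOW, N_SENSORS), torch.rand(512, 1) * 150
train(model, xs, ys, model.parameters())
# new user: freeze the backbone, fine-tune only the head on a small sample
for p in model.backbone.parameters():
    p.requires_grad = False
xt, yt = torch.randn(32, WINDOW, N_SENSORS), torch.rand(32, 1) * 150
print("fine-tune MSE:", train(model, xt, yt, model.head.parameters()))
```

Freezing the backbone mirrors the abstract's transfer-learning result: most of the learned representation carries over, and only a small per-user calibration set is needed to adapt the head.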

DeepMix: mobility-aware, lightweight, and hybrid 3D object detection for headsets - #XR

Mobile headsets should be capable of understanding 3D physical environments to offer a truly immersive experience for augmented/mixed reality (AR/MR). However, their small form factor and limited computation resources make it extremely challenging to execute 3D vision algorithms in real time, since these are known to be more compute-intensive than their 2D counterparts. In this paper, we propose DeepMix, a mobility-aware, lightweight, and hybrid 3D object detection framework for improving the user experience of AR/MR on mobile headsets. Motivated by our analysis and evaluation of state-of-the-art 3D object detection models, DeepMix intelligently combines edge-assisted 2D object detection with novel on-device 3D bounding-box estimation that leverages depth data captured by headsets. This leads to low end-to-end latency and significantly boosts detection accuracy in mobile scenarios. A unique feature of DeepMix is that it fully exploits the mobility of headsets to fine-tune detection results and boost detection accuracy. To the best of our knowledge, DeepMix is the first 3D object detection framework that achieves 30 FPS (i.e., an end-to-end latency well below the stringent 100 ms requirement of interactive AR/MR). We implement a prototype of DeepMix on Microsoft HoloLens and evaluate its performance via both extensive controlled experiments and a user study with 30+ participants. Compared to a baseline that uses existing 3D object detection models, DeepMix not only improves detection accuracy by 9.1-37.3% but also reduces end-to-end latency by 2.68-9.15×.
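
A minimal sketch of the hybrid split described above (illustrative only; lift_box_to_3d and the intrinsics are assumptions, not DeepMix's implementation): the edge server supplies a 2D detection, and the headset lifts it to 3D by back-projecting the depth pixels inside the box through the camera intrinsics and bounding the resulting points.

```python
# On-device 3D lift of an edge-provided 2D box using headset depth (sketch).
import numpy as np

# assumed pinhole intrinsics (focal lengths fx, fy and principal point cx, cy)
FX, FY, CX, CY = 500.0, 500.0, 320.0, 240.0

def lift_box_to_3d(depth, box2d):
    """depth: (H, W) metres; box2d: (u0, v0, u1, v1) from the 2D detector."""
    u0, v0, u1, v1 = box2d
    v, u = np.mgrid[v0:v1, u0:u1]             # pixel grid inside the 2D box
    z = depth[v0:v1, u0:u1]
    valid = z > 0                             # drop missing depth readings
    x = (u[valid] - CX) * z[valid] / FX       # back-project to camera space
    y = (v[valid] - CY) * z[valid] / FY
    pts = np.stack([x, y, z[valid]], axis=1)  # (N, 3) point cloud
    # robust axis-aligned 3D box: trim outliers with percentiles
    lo, hi = np.percentile(pts, 5, axis=0), np.percentile(pts, 95, axis=0)
    return lo, hi                             # opposite corners of the 3D box

# usage on synthetic data: a flat object about 2 m away inside the 2D box
depth = np.zeros((480, 640))
depth[200:300, 250:400] = 2.0 + 0.01 * np.random.randn(100, 150)
print(lift_box_to_3d(depth, (250, 200, 400, 300)))
```

Keeping the expensive 2D detector on the edge and only this cheap geometric lift on-device is what makes the reported low end-to-end latency plausible.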

kenna @lemm.ee - Posts 32 - Comments 0