I am currently a Senior Application Engineer at Sensory, Inc. Sensory is a profitable, fast-growing private company that holds the largest market share for dedicated speech recognition and voice biometric ICs. It is also the first company to develop a Voice Interface for Bluetooth® headsets. Sensory's mission is to improve the way humans interact with consumer products by designing highly accurate, low-cost, low-current, small-footprint speech technologies that feature voice recognition and synthesis, biometric passwords, MIDI-like music synthesis, text-to-speech, and interactive robotic controls.
Specialties: Multimodal integration technology --- fusing interface inputs from speech, sketch and handwriting in ways that make application interfaces more natural and user friendly.
Senior Application Engineer @
From December 2013 to Present (2 years 1 month)

Research Scientist @ Adapx
From July 2005 to December 2013 (8 years 6 months)
As the Principal Investigator for Adapx's work on DARPA's Deep Green program, I led our research group through all three phases of the project. In Phase 1, our approach to fusing sketch and speech inputs for creating military symbology on GIS maps was judged superior to competing groups' efforts. In Phase 2, having won the formally graded competition in the earlier phase, we became solely responsible for the multimodal interface implementation and expanded our coverage beyond symbol creation to multimodally defining a full ontology of military tasks. In Phase 3, based on growing appreciation for the attractiveness of our multimodal interface for military planning tasks on GIS maps, we were chosen as the prime contractor for the final phase of the program. The resulting planning application, STP (Sketch-Thru-Plan), is now being transitioned to various branches of the US armed forces.

Consultant @
From 2002 to 2002 (less than a year)
Consulted on aspects of speech recognition within the CSLU Toolkit (Center for Spoken Language Understanding, OGI/OHSU).

Intern @
From June 2001 to September 2001 (4 months)
During a four-month internship, contributed to the design and implementation of a top-to-bottom speech recognition system based on AT&T's Finite State Toolkit. At the word level, the system incorporated aspects of my design ideas for robust semantic parsing.
PhD, Computer Science, Multimodal Systems (Speech, Gesture, and Handwriting Recognition and Fusion) @ OHSU
Master of Science (M.S.), Computer Science @ OGI
Associate of Science (A.S.), Computer Software Engineering @ Portland Community College, From 1994 to 1996
Bachelor of Arts (B.A.), American History (United States) @ Reed College, From 1981 to 1986

Edward Kaiser is skilled in: Software Development, Software Project Management, Speech Processing, Integration, Artificial Intelligence, Machine Learning, Computer Science, Software Engineering, C++, Visual Studio, Software Design, C#, Human Computer Interaction, Java, Programming