Audio menus describing media contents of media players

Abstract

Methods, systems, and computer program products are provided for creating an audio menu describing media content of a media player. Embodiments include retrieving metadata describing the media files managed by the media player; converting at least a portion of the metadata to speech; creating one or more media files for the audio menu; and saving the speech in the audio portion of the one or more media files for the audio menu.

Claims

1. A computer-implemented method for creating an audio menu describing media content of a media player, the method comprising: retrieving metadata describing the media files managed by the media player, further comprising: retrieving an extensible markup language (‘XML’) metadata file describing the media files managed by the media player; identifying in dependence upon the XML metadata file an organization of the media files managed by the media player; converting at least a portion of the metadata to speech, including converting metadata describing a particular media file managed by the media player to speech; creating one or more media files for the audio menu; and saving the speech in the audio portion of the one or more media files for the audio menu, further comprising saving the speech according to the organization of the media files managed by the media player.

2. The method of claim 1 wherein retrieving metadata further comprises: retrieving from each of the media files managed by the media player metadata describing the media file; identifying in dependence upon a file system in which the media files are stored an organization of the media files managed by the media player; and wherein saving the speech in the audio portion of the one or more media files for the audio menu further comprises: saving the speech according to the organization of the media files managed by the media player.

3. The method of claim 2 further comprising prepending speech converted from the metadata of a media file managed by the media player to the audio portion of the media file.

4. The method of claim 1 further comprising creating an audio file organization menu including: identifying an organization of the media files managed by the media player; creating speech describing the organization of the media files managed by the media player; creating one or more media files; and saving the created speech describing the organization of the media files managed by the media player in the one or more media files.

5. A system for creating an audio menu describing media content of a media player, the system comprising: a computer processor; a computer memory operatively coupled to the computer processor; the computer memory having disposed within it computer program instructions capable of: retrieving metadata describing the media files managed by the media player, further comprising: retrieving an extensible markup language (‘XML’) metadata file describing the media files managed by the media player; identifying in dependence upon the XML metadata file an organization of the media files managed by the media player; converting at least a portion of the metadata to speech, including converting metadata describing a particular media file managed by the media player to speech; creating one or more media files for the audio menu; and saving the speech in the audio portion of the one or more media files for the audio menu, further comprising saving the speech according to the organization of the media files managed by the media player.

6. The system of claim 5 wherein computer program instructions capable of retrieving metadata further comprise computer program instructions capable of: retrieving from each of the media files managed by the media player metadata describing the media file; identifying in dependence upon a file system in which the media files are stored an organization of the media files managed by the media player; and wherein computer program instructions capable of saving the speech in the audio portion of the one or more media files for the audio menu further comprise computer program instructions capable of: saving the speech according to the organization of the media files managed by the media player.

7. The system of claim 6 wherein the computer memory also has disposed within it computer program instructions capable of prepending speech converted from the metadata of a media file managed by the media player to the audio portion of the media file.

8. The system of claim 5 wherein the computer memory also has disposed within it computer program instructions capable of creating an audio file organization menu, including computer program instructions capable of: identifying an organization of the media files managed by the media player; creating speech describing the organization of the media files managed by the media player; creating one or more media files; and saving the created speech describing the organization of the media files managed by the media player in the one or more media files.

9. A computer program product for creating an audio menu describing media content of a media player, the computer program product embodied on a computer-readable recording medium, the computer program product comprising: computer program instructions for retrieving metadata describing the media files managed by the media player, further comprising: computer program instructions for retrieving an extensible markup language (‘XML’) metadata file describing the media files managed by the media player; computer program instructions for identifying in dependence upon the XML metadata file an organization of the media files managed by the media player; computer program instructions for converting at least a portion of the metadata to speech, including computer program instructions for converting metadata describing a particular media file managed by the media player to speech; computer program instructions for creating one or more media files for the audio menu; and computer program instructions for saving the speech in the audio portion of the one or more media files for the audio menu, further comprising computer program instructions for saving the speech according to the organization of the media files managed by the media player.

10. The computer program product of claim 9 wherein computer program instructions for retrieving metadata further comprise: computer program instructions for retrieving from each of the media files managed by the media player metadata describing the media file; computer program instructions for identifying in dependence upon a file system in which the media files are stored an organization of the media files managed by the media player; and wherein computer program instructions for saving the speech in the audio portion of the one or more media files for the audio menu further comprise: computer program instructions for saving the speech according to the organization of the media files managed by the media player.

11. The computer program product of claim 10 further comprising computer program instructions for prepending speech converted from the metadata of a media file managed by the media player to the audio portion of the media file.

12. The computer program product of claim 9 further comprising computer program instructions for creating an audio file organization menu including: computer program instructions for identifying an organization of the media files managed by the media player; computer program instructions for creating speech describing the organization of the media files managed by the media player; computer program instructions for creating one or more media files; and computer program instructions for saving the created speech describing the organization of the media files managed by the media player in the one or more media files.
BACKGROUND OF THE INVENTION

1. Field of the Invention

The field of the invention is data processing, or, more specifically, methods, systems, and products for creating an audio menu describing media content of a media player.

2. Description of Related Art

Portable media players are often lightweight, making such players user friendly and popular. Many conventional portable media players include display screens for displaying metadata associated with the media files supported by the portable media players, in addition to being capable of playing the media files themselves. To read the metadata from the display screen, users must either be able to see or be in a position to look at the display screen of the portable media player. Blind users and users who are currently visually occupied cannot use the display screen to read the metadata associated with the media files supported by the portable media player.

SUMMARY OF THE INVENTION

Methods, systems, and computer program products are provided for creating an audio menu describing media content of a media player. Embodiments include retrieving metadata describing the media files managed by the media player; converting at least a portion of the metadata to speech; creating one or more media files for the audio menu; and saving the speech in the audio portion of the one or more media files for the audio menu. The foregoing and other objects, features and advantages of the invention will be apparent from the following more particular descriptions of exemplary embodiments of the invention as illustrated in the accompanying drawings wherein like reference numbers generally represent like parts of exemplary embodiments of the invention.

BRIEF DESCRIPTION OF THE DRAWINGS

FIG. 1 sets forth a block diagram of an exemplary system for creating an audio menu describing media content of a media player according to the present invention. FIG.
2 sets forth a block diagram of automated computing machinery including a computer useful in creating an audio menu describing media content of a media player according to the present invention.

FIG. 3 sets forth a flow chart illustrating an exemplary method for creating an audio menu describing media content of a media player.

FIG. 4 sets forth a flow chart illustrating another exemplary method for creating an audio menu describing media content of a media player that includes retrieving a metadata file describing the media files managed by the media player.

FIG. 5 sets forth a flow chart illustrating another exemplary method for creating an audio menu describing media content of a media player.

FIG. 6 sets forth a flow chart illustrating an exemplary method for creating an audio file organization menu.

DETAILED DESCRIPTION OF EXEMPLARY EMBODIMENTS

Exemplary methods, systems, and products for creating audio menus describing media contents of media players are described with reference to the accompanying drawings, beginning with FIG. 1. FIG. 1 sets forth a block diagram of an exemplary system for creating an audio menu describing media content of a media player according to the present invention. The system of FIG. 1 includes a personal computer ( 106 ) having installed upon it a digital media player application ( 232 ) and an audio menu creation module ( 454 ). A digital media player application ( 234 ) is an application that manages media content in media files such as audio files and video files. Such digital media player applications are typically capable of storing the media files on a portable media player. Examples of digital media player applications include Music Match™, iTunes®, Songbird™ and others as will occur to those of skill in the art.
The audio menu creation module ( 232 ) is an application for creating an audio menu describing media content of a media player according to the present invention, including computer program instructions for retrieving metadata describing the media files managed by the media player; converting at least a portion of the metadata to speech; creating one or more media files for the audio menu; and saving the speech in the audio portion of the one or more media files for the audio menu. The digital media player application ( 232 ) is capable of transferring the one or more media files having the speech of the metadata describing the other media files managed by the media player to a portable media player ( 108 ).

A portable media player ( 108 ) is a device capable of rendering media files and other content. Examples of portable media players include the iPod® from Apple and the Zen Vision from Creative Labs. The portable media player ( 108 ) of FIG. 1 includes a display screen ( 110 ) for rendering video content and visually rendering metadata describing media files stored on the portable media player ( 108 ). The portable media player ( 108 ) of FIG. 1 also includes headphones ( 112 ) for rendering audio content of media files stored on the portable media player.

The arrangement of devices making up the exemplary system illustrated in FIG. 1 is for explanation, not for limitation. Data processing systems useful according to various embodiments of the present invention may include additional servers, routers, other devices, and peer-to-peer architectures, not shown in FIG. 1, as will occur to those of skill in the art. Networks in such data processing systems may support many data communications protocols, including for example TCP (Transmission Control Protocol), IP (Internet Protocol), HTTP (HyperText Transfer Protocol), WAP (Wireless Access Protocol), HDTP (Handheld Device Transport Protocol), and others as will occur to those of skill in the art.
Various embodiments of the present invention may be implemented on a variety of hardware platforms in addition to those illustrated in FIG. 1. Creating an audio menu describing media content of a media player according to the present invention is generally implemented with computers, that is, with automated computing machinery. In the system of FIG. 1, for example, all the devices are implemented to some extent at least as computers. For further explanation, therefore, FIG. 2 sets forth a block diagram of automated computing machinery comprising a computer useful in creating an audio menu describing media content of a media player according to the present invention.

The computer ( 114 ) of FIG. 2 includes at least one computer processor ( 156 ) or ‘CPU’ as well as random access memory ( 168 ) (‘RAM’) which is connected through a system bus ( 160 ) to the processor ( 156 ) and to other components of the computer ( 114 ). Stored in RAM ( 168 ) is a digital media player application ( 234 ). As mentioned above, a digital media player application ( 234 ) is an application that manages media content in media files such as audio files and video files. Such digital media player applications are typically capable of storing the managed media files on a portable media player. Examples of digital media player applications include Music Match™, iTunes®, Songbird™ and others as will occur to those of skill in the art.

Also stored in RAM ( 168 ) is an audio menu creation module ( 232 ), an application for creating an audio menu describing media content of a media player according to the present invention, including computer program instructions for retrieving metadata describing the media files managed by the media player; converting at least a portion of the metadata to speech; creating one or more media files for the audio menu; and saving the speech in the audio portion of the one or more media files for the audio menu.
Also stored in RAM ( 168 ) is an operating system ( 154 ). Operating systems useful in computers according to embodiments of the present invention include UNIX™, Linux™, Microsoft Windows NT™, AIX™, IBM's i5/OS™, and others as will occur to those of skill in the art.

The exemplary computer ( 114 ) of FIG. 2 includes non-volatile computer memory ( 166 ) coupled through a system bus ( 160 ) to the processor ( 156 ) and to other components of the computer ( 114 ). Non-volatile computer memory ( 166 ) may be implemented as a hard disk drive ( 170 ), an optical disk drive ( 172 ), an electrically erasable programmable read-only memory space (so-called ‘EEPROM’ or ‘Flash’ memory) ( 174 ), RAM drives (not shown), or as any other kind of computer memory as will occur to those of skill in the art.

The exemplary computer ( 114 ) of FIG. 2 includes one or more input/output interface adapters ( 178 ). Input/output interface adapters in computers implement user-oriented input/output through, for example, software drivers and computer hardware for controlling output to display devices ( 180 ) such as computer display screens, as well as user input from user input devices ( 181 ) such as keyboards and mice.

The exemplary computer ( 114 ) of FIG. 2 includes a communications adapter ( 167 ) for implementing data communications ( 184 ) with rendering devices ( 202 ). Such data communications may be carried out serially through RS-232 connections, through external buses such as USB, through data communications networks such as IP networks, and in other ways as will occur to those of skill in the art. Communications adapters implement the hardware level of data communications through which one computer sends data communications to another computer, directly or through a network.
Examples of communications adapters useful for creating an audio menu describing media content of a media player include modems for wired dial-up communications, Ethernet (IEEE 802.3) adapters for wired network communications, 802.11b adapters for wireless network communications, and others as will occur to those of skill in the art.

For further explanation, FIG. 3 sets forth a flow chart illustrating an exemplary method for creating an audio menu describing media content of a media player. The method of FIG. 3 includes retrieving ( 302 ) metadata ( 304 ) describing the media files managed by the media player as discussed below with reference to FIG. 4. Retrieving ( 302 ) metadata ( 304 ) describing the media files managed by the media player according to the method of FIG. 3 may be carried out by retrieving a metadata file describing the media files. iTunes®, for example, maintains an eXtensible Markup Language (‘XML’) library file describing the media files managed by iTunes®. Retrieving ( 302 ) metadata ( 304 ) describing the media files managed by the media player alternatively may be carried out by individually retrieving metadata describing each media file from each of the media files managed by the media player themselves, as discussed below with reference to FIG. 5. Some media file formats, such as, for example, the MPEG file format, provide a portion of the file for storing metadata. MPEG file formats support, for example, an ID3v2 tag prepended to the audio portion of the file for storing metadata describing the file.

The method of FIG. 3 also includes converting ( 306 ) at least a portion of the metadata ( 304 ) to speech ( 308 ).
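The metadata-file retrieval just described can be sketched in Python as follows. The `<library>`/`<track>` schema and the field names are simplified illustrations invented for this example; the actual iTunes library file is an Apple plist with a different structure.

```python
import xml.etree.ElementTree as ET

# Hypothetical, simplified metadata library file describing managed media files.
LIBRARY_XML = """
<library>
  <track><title>So What</title><artist>Miles Davis</artist><genre>Jazz</genre></track>
  <track><title>Giant Steps</title><artist>John Coltrane</artist><genre>Jazz</genre></track>
</library>
"""

def retrieve_metadata(xml_text):
    """Parse the metadata file and return one dict per managed media file."""
    root = ET.fromstring(xml_text)
    return [
        {child.tag: child.text for child in track}  # e.g. title, artist, genre
        for track in root.findall("track")
    ]

tracks = retrieve_metadata(LIBRARY_XML)
```

Each returned dictionary is the portion of metadata that would then be converted to speech for the audio menu.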
Converting ( 306 ) at least a portion of the metadata ( 304 ) to speech ( 308 ) may be carried out by processing the extracted metadata using a text-to-speech engine in order to produce a speech presentation of the extracted metadata and then recording the speech produced by the text-to-speech engine in the audio portion of a media file. Examples of speech engines capable of converting at least a portion of the metadata to speech for recording in the audio portion of a media file include, for example, IBM's ViaVoice Text-to-Speech, Acapela Multimedia TTS, AT&T Natural Voices™ Text-to-Speech Engine, and Python's pyTTS class. Each of these text-to-speech engines is composed of a front end that takes input in the form of text and outputs a symbolic linguistic representation to a back end that outputs the received symbolic linguistic representation as a speech waveform.

Typically, speech synthesis engines operate by using one or more of the following categories of speech synthesis: articulatory synthesis, formant synthesis, and concatenative synthesis. Articulatory synthesis uses computational biomechanical models of speech production, such as models for the glottis and the moving vocal tract. Typically, an articulatory synthesizer is controlled by simulated representations of muscle actions of the human articulators, such as the tongue, the lips, and the glottis. Computational biomechanical models of speech production solve time-dependent, 3-dimensional differential equations to compute the synthetic speech output. Typically, articulatory synthesis has very high computational requirements, and has lower results in terms of natural-sounding fluent speech than the other two methods discussed below.

Formant synthesis uses a set of rules for controlling a highly simplified source-filter model that assumes that the glottal source is completely independent from a filter which represents the vocal tract.
The filter that represents the vocal tract is determined by control parameters such as formant frequencies and bandwidths. Each formant is associated with a particular resonance, or peak in the filter characteristic, of the vocal tract. The glottal source generates either stylized glottal pulses for periodic sounds or noise for aspiration. Formant synthesis generates highly intelligible, but not completely natural sounding, speech. However, formant synthesis has a low memory footprint and only moderate computational requirements.

Concatenative synthesis uses actual snippets of recorded speech that are cut from recordings and stored in an inventory or voice database, either as waveforms or as encoded speech. These snippets make up the elementary speech segments such as, for example, phones and diphones. Phones are composed of a vowel or a consonant, whereas diphones are composed of phone-to-phone transitions that encompass the second half of one phone plus the first half of the next phone. Some concatenative synthesizers use so-called demi-syllables, in effect applying the diphone method to the time scale of syllables. Concatenative synthesis then strings together, or concatenates, elementary speech segments selected from the voice database, and, after optional decoding, outputs the resulting speech signal. Because concatenative systems use snippets of recorded speech, they have the highest potential for sounding like natural speech, but concatenative systems require large amounts of database storage for the voice database.

The method of FIG. 3 also includes creating ( 310 ) one or more media files ( 312 ) for the audio menu and saving ( 314 ) the speech ( 308 ) in the audio portion of the one or more media files ( 312 ) for the audio menu.
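The concatenative approach described above can be illustrated with a toy sketch: a voice database maps elementary speech segments to stored snippets, and synthesis concatenates the selected snippets. The segment names and the short lists of numbers standing in for waveform samples are placeholders invented for the example; real voice databases hold recorded phones and diphones.

```python
# Toy voice database: elementary segment name -> placeholder "samples".
VOICE_DB = {
    "so": [0.1, 0.3, 0.2],
    "ng": [0.0, -0.2],
}

def synthesize(segments, voice_db):
    """Concatenate the stored snippet for each selected elementary segment."""
    waveform = []
    for seg in segments:
        waveform.extend(voice_db[seg])  # look up and append the snippet
    return waveform

wave_out = synthesize(["so", "ng"], VOICE_DB)
```

A production concatenative synthesizer would additionally select among candidate snippets and smooth the joins, but the core operation is this string-together step.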
Examples of media file formats useful in creating an audio menu describing media content of a media player according to the present invention include MPEG 3 (‘.mp3’) files, MPEG 4 (‘.mp4’) files, Advanced Audio Coding (‘AAC’) compressed files, Advanced Streaming Format (‘ASF’) files, WAV files, and many others as will occur to those of skill in the art.

To further aid users of audio menus according to the present invention, some embodiments also include prepending speech converted from the metadata of a media file managed by the media player to the audio portion of the media file. Prepending speech converted from the metadata of a media file managed by the media player to the audio portion of the media file provides additional functionality to each media file managed by the media player and advantageously provides a speech description of the media file prior to the content of the media file itself. This speech description prepended to the audio content allows users to determine from the description whether to play the content.

As discussed above, retrieving metadata describing the media files managed by the media player may be carried out by retrieving a metadata file describing the media files. For further explanation, therefore, FIG. 4 sets forth a flow chart illustrating another exemplary method for creating an audio menu describing media content of a media player that includes retrieving a metadata file describing the media files managed by the media player. The method of FIG. 4 is similar to the method of FIG. 3 in that the method of FIG. 4 includes retrieving ( 302 ) metadata ( 304 ) describing the media files managed by the media player; converting ( 306 ) at least a portion of the metadata ( 304 ) to speech ( 308 ); creating ( 310 ) one or more media files ( 312 ) for the audio menu; and saving ( 314 ) the speech ( 308 ) in the audio portion of the one or more media files ( 312 ) for the audio menu.
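Prepending a speech description to a media file's audio portion can be sketched with uncompressed WAV data and Python's standard `wave` module; this is a minimal illustration, and production code for MP3 or AAC files would need format-aware tools rather than raw frame concatenation.

```python
import io
import wave

def make_wav(frames, rate=8000):
    """Build a minimal mono 16-bit WAV file in memory from raw frame bytes."""
    buf = io.BytesIO()
    with wave.open(buf, "wb") as w:
        w.setnchannels(1)
        w.setsampwidth(2)
        w.setframerate(rate)
        w.writeframes(frames)
    return buf.getvalue()

def prepend_speech(speech_wav, content_wav):
    """Return a WAV whose audio is the speech description followed by the content."""
    with wave.open(io.BytesIO(speech_wav)) as s, wave.open(io.BytesIO(content_wav)) as c:
        params = s.getparams()
        frames = s.readframes(s.getnframes()) + c.readframes(c.getnframes())
    out_buf = io.BytesIO()
    with wave.open(out_buf, "wb") as out:
        out.setparams(params)       # same channels/width/rate as the speech clip
        out.writeframes(frames)     # header frame count is patched on close
    return out_buf.getvalue()

# 4 frames of "speech" prepended to 8 frames of "content" (placeholder samples).
menu_entry = prepend_speech(make_wav(b"\x01\x00" * 4), make_wav(b"\x02\x00" * 8))
with wave.open(io.BytesIO(menu_entry)) as check:
    total_frames = check.getnframes()
```

The sketch assumes both clips share the same sample format, as `setparams` copies the speech clip's parameters.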
In the method of FIG. 4, retrieving ( 302 ) metadata includes retrieving ( 402 ) a metadata file ( 404 ) describing the media files managed by the media player. As mentioned above, one example of such a metadata file is an eXtensible Markup Language (‘XML’) library file describing the media files managed by iTunes®.

In the method of FIG. 4, retrieving ( 302 ) metadata also includes identifying ( 406 ) in dependence upon the metadata file ( 404 ) an organization ( 408 ) of the media files managed by the media player. Identifying ( 406 ) in dependence upon the metadata file ( 404 ) an organization ( 408 ) of the media files managed by the media player may include determining a logical structure, such as for example a tree structure, of the organization of the media files, identifying playlists, determining the organization of media files by artist or genre, or any other organization of the media files as will occur to those of skill in the art. Identifying ( 406 ) in dependence upon the metadata file ( 404 ) an organization ( 408 ) of the media files managed by the media player may be carried out by parsing the markup of a metadata file such as, for example, the XML library file describing the media files managed by iTunes®, to determine a logical structure of the organization of the media files, to identify playlists, to determine any organization of media files by artist or genre, or any other organization of the media files as will occur to those of skill in the art.

In the method of FIG. 4, saving ( 314 ) the speech ( 308 ) in the audio portion of the one or more media files ( 312 ) for the audio menu also includes saving ( 410 ) the speech ( 308 ) according to the organization ( 408 ) of the media files ( 502 ) managed by the media player. Saving ( 410 ) the speech ( 308 ) according to the organization ( 408 ) of the media files ( 502 ) managed by the media player according to the method of FIG.
4 may be carried out by saving the speech in a logical sequence corresponding with any logical structure of the organization of the media files, identified playlists, organization of media files by artist or genre, or any other organization of the media files as will occur to those of skill in the art.

As discussed above, retrieving metadata describing the media files managed by a media player may also include individually retrieving metadata describing each media file from each of the media files managed by the media player themselves. For further explanation, therefore, FIG. 5 sets forth a flow chart illustrating another exemplary method for creating an audio menu describing media content of a media player. The method of FIG. 5 is similar to the methods of FIG. 3 and FIG. 4 in that the method of FIG. 5 also includes retrieving ( 302 ) metadata ( 304 ) describing the media files managed by the media player; converting ( 306 ) at least a portion of the metadata ( 304 ) to speech ( 308 ); creating ( 310 ) one or more media files ( 312 ) for the audio menu; and saving ( 314 ) the speech ( 308 ) in the audio portion of the one or more media files ( 312 ) for the audio menu.

In the method of FIG. 5, however, retrieving ( 302 ) metadata includes retrieving ( 506 ) from each of the media files ( 502 ) managed by the media player metadata ( 508 ) describing the media file ( 502 ). As described above, some media file formats, such as, for example, the MPEG file format, provide a portion of the file for storing metadata. MPEG file formats support, for example, an ID3v2 tag prepended to the audio portion of the file for storing metadata describing the file. Retrieving ( 506 ) from each of the media files ( 502 ) managed by the media player metadata ( 508 ) describing the media file ( 502 ) may therefore be carried out by retrieving metadata from an ID3v2 tag or other header or container for metadata of each of the media files managed by the media player.
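Reading the ID3v2 tag that such files prepend to the audio portion begins with its 10-byte header, which can be sketched as follows. The parser handles only the header and its 28-bit 'syncsafe' size field (7 significant bits per byte, high bit always zero), not full frame parsing; the handcrafted header bytes are a test fixture, not data from a real file.

```python
def parse_id3v2_header(data):
    """Parse the 10-byte ID3v2 header prepended to an MPEG audio file.

    Returns ((major, revision), tag_size), where tag_size excludes the
    header itself and is stored as a syncsafe integer.
    """
    if data[:3] != b"ID3":
        raise ValueError("no ID3v2 tag present")
    major, revision = data[3], data[4]
    size = 0
    for byte in data[6:10]:
        size = (size << 7) | (byte & 0x7F)  # accumulate 7 bits per byte
    return (major, revision), size

# Handcrafted header: ID3v2.3.0, no flags, syncsafe size bytes 00 00 02 01.
header = b"ID3" + bytes([3, 0, 0, 0x00, 0x00, 0x02, 0x01])
version, size = parse_id3v2_header(header)
```

With the tag size in hand, a retriever would read that many bytes of frames (title, artist, and so on) before the audio portion begins.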
Creating an audio menu describing media content of a media player according to the method of FIG. 5 also includes identifying ( 510 ) in dependence upon a file system ( 504 ) in which the media files ( 502 ) are stored an organization ( 512 ) of the media files ( 502 ) managed by the media player. Identifying ( 510 ) in dependence upon a file system ( 504 ) in which the media files ( 502 ) are stored an organization ( 512 ) of the media files ( 502 ) managed by the media player may be carried out by identifying, in dependence upon the logical tree structure of the file system ( 504 ), an organization of the media files representing that logical structure of the file system. Such an organization may provide for playlists, organization of media files by artist or genre, or other organization by logical structure in a file system.

In the method of FIG. 5, saving ( 314 ) the speech ( 308 ) in the audio portion of the one or more media files ( 312 ) for the audio menu also includes saving ( 514 ) the speech ( 308 ) according to the organization ( 512 ) of the media files ( 502 ) managed by the media player. Saving ( 514 ) the speech ( 308 ) according to the organization ( 512 ) of the media files ( 502 ) managed by the media player according to the method of FIG. 5 may be carried out by saving the speech in a logical sequence corresponding with the identified logical structure of the file system of the media files.

As an aid to users of audio menus according to the present invention, some embodiments of the present invention also include providing not only a description of the media files managed by the media player, but also a description of the organization of those media files, such that a user may be informed of the organization of the media files, empowering the user to navigate the media files using the audio menu. For further explanation, FIG.
6 sets forth a flow chart illustrating an exemplary method for creating an audio file organization menu including identifying ( 602 ) an organization ( 604 ) of the media files managed by the media player and creating ( 606 ) speech ( 608 ) describing the organization ( 604 ) of the media files managed by the media player. Identifying ( 602 ) an organization of the media files may be carried out in dependence upon a metadata file as described above with reference to FIG. 4, in dependence upon the logical organization of the media files in a file system as described above with reference to FIG. 5, or in other ways as will occur to those of skill in the art. Creating ( 606 ) speech ( 608 ) describing the organization ( 604 ) of the media files managed by the media player may be carried out by using a speech synthesis engine to create speech describing the identified organization ( 604 ), as discussed above.

The method of FIG. 6 also includes creating ( 610 ) one or more media files ( 312 ) and saving ( 614 ) the created speech ( 608 ) describing the organization ( 604 ) of the media files managed by the media player in the one or more media files ( 312 ). Examples of media files useful in creating an audio menu describing media content of a media player according to the present invention include MPEG 3 (‘.mp3’) files, MPEG 4 (‘.mp4’) files, Advanced Audio Coding (‘AAC’) compressed files, Advanced Streaming Format (‘ASF’) files, WAV files, and many others as will occur to those of skill in the art.

Exemplary embodiments of the present invention are described largely in the context of a fully functional computer system for creating an audio menu describing media content of a media player. Readers of skill in the art will recognize, however, that the present invention also may be embodied in a computer program product disposed on signal bearing media for use with any suitable data processing system.
Such signal bearing media may be transmission media or recordable media for machine-readable information, including magnetic media, optical media, or other suitable media. Examples of recordable media include magnetic disks in hard drives or diskettes, compact disks for optical drives, magnetic tape, and others as will occur to those of skill in the art. Examples of transmission media include telephone networks for voice communications and digital data communications networks such as, for example, Ethernets™ and networks that communicate with the Internet Protocol and the World Wide Web. Persons skilled in the art will immediately recognize that any computer system having suitable programming means will be capable of executing the steps of the method of the invention as embodied in a program product. Persons skilled in the art will recognize immediately that, although some of the exemplary embodiments described in this specification are oriented to software installed and executing on computer hardware, nevertheless, alternative embodiments implemented as firmware or as hardware are well within the scope of the present invention. It will be understood from the foregoing description that modifications and changes may be made in various embodiments of the present invention without departing from its true spirit. The descriptions in this specification are for purposes of illustration only and are not to be construed in a limiting sense. The scope of the present invention is limited only by the language of the following claims.
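The alternative recited in claim 1, retrieving an XML metadata file and identifying an organization of the media files in dependence upon it, may likewise be sketched. The `<library>`/`<playlist>`/`<track>` schema and the helper names below are hypothetical illustrations, not taken from the disclosure; the text produced by `describe_organization` would then be passed to a speech synthesis engine and saved in the audio portion of the menu's media files.

```python
import xml.etree.ElementTree as ET
from typing import Dict, List

def organization_from_xml(xml_text: str) -> Dict[str, List[str]]:
    """Identify, in dependence upon an XML metadata file describing the
    media files, an organization of the media files managed by the player.
    The library/playlist/track schema here is illustrative only."""
    library = ET.fromstring(xml_text)
    organization: Dict[str, List[str]] = {}
    for playlist in library.findall("playlist"):
        name = playlist.get("name", "Unnamed playlist")
        organization[name] = [
            track.get("title", "Untitled track")
            for track in playlist.findall("track")
        ]
    return organization

def describe_organization(organization: Dict[str, List[str]]) -> List[str]:
    """Create text describing the organization of the media files, ready to
    be converted to speech for the audio file organization menu."""
    lines: List[str] = []
    for name, titles in organization.items():
        lines.append(f"Playlist {name} contains {len(titles)} tracks.")
        lines.extend(f"Track: {title}" for title in titles)
    return lines
```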

