The MediaPortal 2 player subsystem needs a closer look as it is not the simplest part of MediaPortal 2.
When a media item is played, multiple system parts are involved: the plugin for the media item navigation (e.g. the Media or the TV plugin), the PlayerContextManager from the MediaPortal 2 core, the actual media player from a player plugin and a player presentation screen plus its screen model. Each of those components has a special job which will be described here.
I’ll start with the PlayerContextManager - together with its underlying PlayerManager - and the idea behind them, because they form the central component and it’s very important to understand their function.
The classes PlayerManager and PlayerContextManager are the most important classes in the player subsystem core part. Together, they are responsible for creating, initializing and maintaining player instances for to-be-played media items. They make sure that at any time, no more than two players are active. And they take care that the two available player slots are filled in the correct order, i.e. that there is never a video player in the secondary player slot while only an audio player is in the primary player slot. Only one player at a time is allowed to play its audio signal. All those jobs are shared between PlayerManager and PlayerContextManager.
The PlayerManager (PM) service is responsible for the basic work; it knows all players which have been made available by plugins and it knows how to request and instantiate a player item from the plugin manager. It starts the appropriate player when a media item should be played. The PM maintains two player slots, called the primary and the secondary player slot. Each active player occupies one slot. For each active player, the PM creates an instance of class PlayerSlotController which is responsible for tracking the player’s state, for reusing the player and for configuring the audio routing and mute state.
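The slot ordering rule can be illustrated with a minimal sketch. This is not the real MediaPortal 2 API (which is C#); the class and method names below are hypothetical simplifications that only demonstrate the invariant: at most two players are active, and a video player never ends up in the secondary slot while the primary slot only holds an audio player.

```python
PRIMARY, SECONDARY = 0, 1

class PlayerSlotController:
    """Tracks one active player (hypothetical simplification)."""
    def __init__(self, media_type):
        self.media_type = media_type  # "audio" or "video"

class PlayerManager:
    def __init__(self):
        self.slots = [None, None]  # primary slot, secondary slot

    def open_slot(self, media_type):
        """Place a new player while keeping the slot order valid."""
        if self.slots[PRIMARY] is None:
            self.slots[PRIMARY] = PlayerSlotController(media_type)
            return PRIMARY
        if self.slots[SECONDARY] is not None:
            raise RuntimeError("no more than two players may be active")
        if media_type == "video" and self.slots[PRIMARY].media_type == "audio":
            # The video player must take the primary slot; the running
            # audio player moves down into the secondary slot.
            self.slots[SECONDARY] = self.slots[PRIMARY]
            self.slots[PRIMARY] = PlayerSlotController(media_type)
            return PRIMARY
        self.slots[SECONDARY] = PlayerSlotController(media_type)
        return SECONDARY
```

Starting a video while an audio player runs therefore swaps the slot assignment instead of placing the video player second.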
The PM is accessed by plugins very rarely; most calls go to the PlayerContextManager.
The PlayerContextManager (PCM) service presents a higher-level interface to the core player system. The PCM wraps each player slot controller into a player context (PC) instance, which presents higher-level properties and methods. The PC provides access to a playlist for the underlying player slot and stores an additional media type attribute, which can be Audio or Video. That PC type cannot be changed during the lifetime of the PC and is used to manage the concurrency of player slots. The PC’s type restricts the type of media items which can be added to the PC’s playlist and it restricts the type of player which can run in that PC. When a client opens a PC of a given type (i.e. audio or video), the PCM takes care that, if necessary, conflicting player contexts are closed before the new PC is opened and a player for the new item is started. If possible, currently active players are reused. The methods which open a new PC accept parameters which control whether all already active PCs are closed or left open to play concurrently.
The PC also stores workflow states for two special media presentation screens: The FullscreenContent (FSC) and the CurrentlyPlaying (CP) states. I’ll describe the meaning of those states later.
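The PCM’s concurrency policy can be sketched roughly as follows. Again, the names are illustrative (the real interface is C# and richer); the sketch only shows that a PC carries a fixed media type plus its FSC/CP state ids, and that opening a new PC either closes all active PCs or keeps one for concurrent playback, capped at two.

```python
class PlayerContext:
    """Hypothetical PC: media type and FSC/CP states are fixed for its lifetime."""
    def __init__(self, media_type, fsc_state, cp_state):
        self.media_type = media_type
        self.fullscreen_content_state = fsc_state
        self.currently_playing_state = cp_state

class PlayerContextManager:
    def __init__(self):
        self.contexts = []  # at most two active PCs

    def open_player_context(self, media_type, fsc_state, cp_state,
                            concurrent=False):
        if not concurrent:
            # Close all conflicting (i.e. all currently active) PCs.
            self.contexts.clear()
        else:
            # Keep at most one existing PC so the total stays at two.
            self.contexts = self.contexts[-1:]
        pc = PlayerContext(media_type, fsc_state, cp_state)
        self.contexts.append(pc)
        return pc
```

A client opening a video PC with `concurrent=True` keeps an already running audio PC alive; without the flag, all active PCs are closed first.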
The player is typically implemented in a separate player plugin which is independent of the media navigation plugin. The player is exchangeable and thus can be replaced (or even complemented) by additional players.
The player is responsible for reading, decoding and presenting media items. In case of an audio player, the player directly produces the audio output which is delivered to Windows. In case of video players, there is an additional interface which must be implemented for the communication with the SkinEngine.
There are multiple types of players: audio, video and picture players, DVD players, TV renderers and some more. Each type of player has its own interface which provides common properties and methods for that type of player. The player interfaces are arranged in an inheritance hierarchy; the topmost interface is the IPlayer interface, which provides the basic player functionality. Aside from this player interface inheritance hierarchy, there are several supporting interfaces which provide additional functionality, like IMediaPlaybackControl, which provides properties and methods to request the current play position and to pause, restart and seek the current playback position. Another supporting interface is IVolumeControl, which can mute a player and set its volume.
Those supporting interfaces can be shared between different player types; e.g. the IVolumeControl interface is implemented by both audio and video players. Depending on the supported interfaces, the system will provide different menu items, dialogs and functionality.
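The interface hierarchy and the capability check can be mirrored in Python terms (the real interfaces are C#; only the names IPlayer, IMediaPlaybackControl and IVolumeControl come from the text, the members below are assumed for illustration):

```python
from abc import ABC, abstractmethod

class IPlayer(ABC):
    """Topmost player interface: basic functionality only."""
    @abstractmethod
    def stop(self): ...

class IMediaPlaybackControl(ABC):
    """Supporting interface: implemented only by players that can seek."""
    @abstractmethod
    def pause(self): ...
    @abstractmethod
    def seek(self, position): ...

class IVolumeControl(ABC):
    """Supporting interface shared by audio and video players."""
    @abstractmethod
    def set_volume(self, percent): ...
    @abstractmethod
    def mute(self): ...

class SimpleAudioPlayer(IPlayer, IMediaPlaybackControl, IVolumeControl):
    """Hypothetical player implementing all three interfaces."""
    def __init__(self):
        self.volume = 100
        self.paused = False
        self.position = 0
    def stop(self): self.paused = False
    def pause(self): self.paused = True
    def seek(self, position): self.position = position
    def set_volume(self, percent): self.volume = percent
    def mute(self): self.volume = 0

def can_seek(player):
    # The system inspects supported interfaces like this to decide
    # which menu items and dialogs to offer for a given player.
    return isinstance(player, IMediaPlaybackControl)
```

A player that cannot seek (e.g. a live stream renderer) would simply not implement IMediaPlaybackControl, and the seek-related UI would not be offered for it.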
Players are implemented by plugins; there are no player implementations in the core MediaPortal 2 system.
Typically, to play a media item, the user first navigates through some media items which have been presented to him (e.g. songs, movies, pictures, but also TV channels or webstreams). The presented media items come from a directory listing or from a media library query. During this navigation workflow, the user chooses one of the media items to be played.
I’ll call the component which presents the media navigation workflow to the user the "media item navigation" component. In case of audio, video or picture items, this is typically the Media plugin. In case of TV channels, this is the TV plugin.
Media navigation plugins are only one example of a client module. Another possible client could be, for example, a MediaPortal 2 tutorial plugin which plays several tutorial videos. Or there might be other plugins which want to play video or audio items.
The PlayerContextManager is shared between all those clients; all clients playing media items are synchronized by the PCM.
Now we have players, a management component for those players (PCM) and a client which can request the playback. The last missing piece is a screen which presents the video/audio content. In MediaPortal 2, each PC stores workflow states for two special media presentation screens: the FullscreenContent (FSC) and the CurrentlyPlaying (CP) screens. The FSC screen for the video player is the screen with the fullscreen video content in the background and the onscreen display in the foreground. The CP screen for the video player shows an information screen for the currently playing video item. For the audio player, the FSC state is the visualization screen and the CP state shows an information screen for the current audio item.
Don't mix up the FSC video screen with the video which is shown in the background - those are completely different things. The background video is just a feature which is provided by the skin. Another skin could show a currently running video in a small preview box, or in a completely different way. The video background screen is part of the skin (maintained by the skin developer), while the video and audio FSC and CP screens are part of the media plugin (for the default players) or part of a third-party player plugin. A player with extended features (compared to the features of our standard players) can only be controlled when the player developer provides a special UI contributor for that player.
Each of those components has its special job and the separation of the different responsibilities is quite important. Developers are urged to understand the different responsibilities. I’ll specify later what I mean by that.
When a media navigation model requests the PCM to play a video, it passes a module id to the PCM. That module id is unique among all media navigation models and it is used to track/reference the formerly created PC. It is not safe to simply use the PC’s slot index (primary, secondary) to gain a reference to it later, because the order of the PCs can change (for example due to an exchange of PiP players).
Two very important parameters when requesting a new player context are those which pass the FSC and CP state ids for the new player context. Those states are fixed per PC and remain attached to it until the PC gets closed. This is an important observation: the media navigation component, which requests the PC, is responsible for setting its FSC and CP states. That means the media navigation component has a dependency on those workflow states and thus the media navigation plugin either must expose those workflow states itself or, in case it uses them from another plugin, it MUST state an explicit dependency on the plugin which exposes those workflow states in its plugin descriptor.
In case of the Media plugin, for example, the FSC and CP states are exposed by the plugin itself.
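A request for a player context might then look roughly like this. All ids and names here are made up for illustration; the real MediaPortal 2 API uses different signatures. The sketch only shows the two points made above: the client passes its module id plus the FSC/CP state ids, and it later looks its PC up by module id, never by slot index.

```python
import uuid

# Hypothetical ids - in a real plugin these come from the plugin descriptor.
MEDIA_MODULE_ID = uuid.UUID("11111111-1111-1111-1111-111111111111")
FSC_VIDEO_STATE = uuid.UUID("22222222-2222-2222-2222-222222222222")
CP_VIDEO_STATE  = uuid.UUID("33333333-3333-3333-3333-333333333333")

class PlayerContextManagerStub:
    def __init__(self):
        self._by_module = {}

    def open_video_player_context(self, module_id, fsc_state, cp_state):
        pc = {"fsc": fsc_state, "cp": cp_state}
        # The PC is tracked by module id; slot indices may change later
        # (e.g. a PiP swap), the module id never does.
        self._by_module[module_id] = pc
        return pc

    def get_player_context(self, module_id):
        return self._by_module.get(module_id)
```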
Each PC stores a dictionary for user-defined (key, value) pairs which can be used by all system components to store arbitrary context data. That could be additional configuration data for the FSC/CP states or something like that.
The PC provides a playlist for its underlying player slot and automatically advances to the next media item when the current item is finished.
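Those two PC features - the user-defined context data and the automatically advancing playlist - can be sketched together; the names are again illustrative, not the real interface:

```python
class PlayerContextSketch:
    """Hypothetical PC holding context data and an auto-advancing playlist."""
    def __init__(self, playlist):
        self.context_data = {}        # arbitrary (key, value) pairs
        self.playlist = list(playlist)
        self.index = 0

    @property
    def current_item(self):
        if self.index < len(self.playlist):
            return self.playlist[self.index]
        return None

    def on_item_finished(self):
        # Called by the core when the player reports end of playback;
        # the PC itself, not the requesting plugin, triggers the advance.
        self.index += 1
        return self.current_item
```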
Media playback happens completely independently of the requesting plugin (i.e. the media navigation plugin). Even if the user navigates completely out of the media navigation workflow (for example into the settings workflow), playback continues: the audio continues playing and the video continues to be shown in the background. Playlist advances also need to be triggered independently. That’s the reason why the core PCM/PC components are responsible for that. That’s also the reason why the PC needs the knowledge about the FSC/CP states: the user can navigate to one of those states at any time, independent of his current workflow position, and thus the media navigation model which started a PC might not be active any more.
Strictly speaking, there is another component which is involved when talking about the player subsystem: the SkinBase plugin. That plugin is responsible for displaying the menu entries for switching into the FSC/CP states for the current player and for opening the PlayerConfigurationDialog. Because those actions are completely independent of the type of the actual players and only need to know WHETHER a player is active, they are implemented in the SkinBase plugin.
There are three independent concepts to mark active PCs:
The primary player is always the player whose picture is shown in fullscreen mode while the secondary player’s picture is shown in PiP. When accessing the players via their indices, the primary player is always at index 0, the secondary player is always at index 1.
The current player is the player which is affected when one of the player control commands from the remote is used (e.g. play, pause, stop etc.). Furthermore, most of the commands in the PlayerConfigurationDialog affect the current player. The current player is the "currently controlled player".
The audio player is the player which currently plays the audio signal. Independent of the current types of PCs, the audio signal can come from any of the active players. It is also possible to mute it, but removing the mute state will restore the audio signal at the former audio player.
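The independence of the three markers can be summarized in a small sketch (hypothetical names): the primary role is bound to index 0, while the "current" and "audio" roles can point at any active player, and muting does not move the audio role.

```python
class PlayerRoles:
    """Illustrative model of the three independent player markers."""
    def __init__(self):
        self.players = []        # index 0 = primary, index 1 = secondary (PiP)
        self.current_index = 0   # receives remote control commands
        self.audio_index = 0     # owns the audio signal
        self.muted = False

    def toggle_mute(self):
        # Muting does not reassign the audio role; unmuting restores
        # the audio signal at the former audio player.
        self.muted = not self.muted
```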
Because players are completely independent of their UI representation screen(s), the communication capabilities between the player presentation screen model and the player are completely specified by the player interface(s). Changes/extensions of player interfaces for the sake of more convenience for player presentation screens/models must be carefully considered. If a very general interface, like IMediaPlaybackControl, is changed, all players implementing that interface are also forced to adapt to that change. Of course, general interfaces must not contain overly special methods/properties/data types.
In MediaPortal 1, player interfaces were often simply extended when foreign components had extended communication requirements. That resulted in messy, big player interfaces which needed to be implemented by all players. Most of the players simply returned null or did nothing in those methods. In MediaPortal 2, such implementation flaws should be avoided.
Player interfaces should be designed in a way that they are general and can be implemented by all players of the respective kind. Pure GUI jobs must not be delegated to the player. That means, for example, that a player should not need to cope with localization.
Supporting interfaces are used to mark players with special capabilities. For example, the IMediaPlaybackControl interface can be implemented if a player is capable of controlling the current play position (live streams might not be able to, for example).
Currently, there are not many design guidelines for player presentation screens. Typically, an FSC screen for a video player (which also includes picture, TV, webstream and other players providing a video picture) will show the primary video picture fullscreen in the background and present an OSD menu which is only visible when the user presses the "info" button or when the mouse is moved. Furthermore, the secondary player must also be shown in a less prominent place. When the mouse is used, play control buttons must be visible so they can be used with the mouse.
FSC screens for audio players might show an audio visualization, a picture or something else which sensibly can be shown fullscreen.
CP screens should show additional information about the current media item.