Design Issues

Coordinator
Dec 28, 2009 at 5:45 PM

Please reply to this message with questions about designing or implementing units in the HVP

Dec 29, 2009 at 3:48 AM
Edited Dec 29, 2009 at 3:48 AM

Hi.

I currently have the RIA service task assigned to me. At first I thought it would be blocked until there is a database, or at least a data model. However, the more I think about it, the more I’m convinced that some preliminary work can be done. I could use an XML file as a mock data store for now (or maybe even a mock SQL Express DB file).

Thinking about what is needed on the client side (and also after cheating a bit and looking at the comments on the database-creation task), the following items seem essential:

  • ID
  • Friendly video title
  • URL to the video
  • TOC (possibly empty)
  • List of markers (possibly empty)

Does that sound reasonable? The first three items would be part of the video-browsing stage, and the last two would be requested when a video is selected for playback (a rough sketch of what I mean follows below).
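To make that a bit more concrete, here is a minimal sketch of what I have in mind, loading the catalog from a mock XML file with LINQ to XML. All of the names (VideoInfo, TocEntry, Marker, videos.xml) are just placeholders I made up; the real shapes would come out of the database/data-model task.

    // Sketch only: class names and the videos.xml layout are hypothetical.
    using System;
    using System.Collections.Generic;
    using System.Linq;
    using System.Xml.Linq;

    public class VideoInfo
    {
        public int Id { get; set; }
        public string Title { get; set; }          // friendly video title
        public Uri Url { get; set; }               // URL to the video
        public List<TocEntry> Toc { get; set; }    // possibly empty
        public List<Marker> Markers { get; set; }  // possibly empty
    }

    public class TocEntry
    {
        public string Heading { get; set; }
        public TimeSpan Position { get; set; }
    }

    public class Marker
    {
        public TimeSpan Time { get; set; }
        public string Text { get; set; }
    }

    public static class MockVideoStore
    {
        // Loads only the browsing fields (Id, Title, Url); the TOC and markers
        // would be requested later, when a video is selected for playback.
        public static IEnumerable<VideoInfo> LoadCatalog(string path)
        {
            return XDocument.Load(path)
                            .Descendants("video")
                            .Select(v => new VideoInfo
                            {
                                Id = (int)v.Attribute("id"),
                                Title = (string)v.Element("title"),
                                Url = new Uri((string)v.Element("url"))
                            })
                            .ToList();
        }
    }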

I was also wondering how the initial video browsing will happen. Normally on a web page you just have links to the videos, but since this context adds metadata to them, I assume the player will have some sort of top-level browsing of the available videos. Is that correct?

 I’d love to hear ideas on the back-end part of the player.


David Mora

Coordinator
Dec 30, 2009 at 12:55 AM
Edited Dec 30, 2009 at 1:34 AM

Actually, your timing is perfect.  

I've posted an update on the design that will very much affect how the DB is used and what is in the various configuration files. You may want to take a look at that before getting started.

Also, as you may know, WL (a startup company)  has committed to providing 100 hours/week for two months to this project, and their SQL person would very much like to work on the RIA Services part. Is that something you would be willing to collaborate with him on?

Thanks!

[updated 21:35 GMT-5]

 

Dec 30, 2009 at 5:13 PM

I read the design update about the new readers-and-frames approach. From what I see, my guess is that I need to learn more about RIA in Silverlight. I come from the Windows/WCF world, where RIA services/DTOs are coded by hand; there, it wouldn't matter much whether the data comes from a SQL database or an XML file. However, the impression I've gotten so far is that RIA in Silverlight may depend more on a SQL database. At any rate, the doubt itself is a sign that I need to look deeper into this.
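For what it's worth, my current (possibly mistaken) understanding is that a RIA Services domain service does not strictly require a SQL/EF backend; a hand-rolled query method could serve data from the mock XML store just as well. A rough sketch, assuming the DomainService base class and [EnableClientAccess] attribute from the WCF RIA Services bits (exact namespaces differ between the beta and the final release) and reusing the placeholder MockVideoStore/VideoInfo names from my earlier post:

    // Sketch only; VideoCatalogService is a made-up name.
    using System.Linq;
    using System.ServiceModel.DomainServices.Hosting;  // [EnableClientAccess]
    using System.ServiceModel.DomainServices.Server;   // DomainService

    [EnableClientAccess]
    public class VideoCatalogService : DomainService
    {
        // Query method: the data could come from videos.xml today and a SQL
        // database later without the Silverlight client caring either way.
        // (VideoInfo would need a [Key] attribute on Id for RIA Services to
        // treat it as an entity.)
        public IQueryable<VideoInfo> GetVideos()
        {
            return MockVideoStore.LoadCatalog("videos.xml").AsQueryable();
        }
    }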

With that in mind, and considering that a person with a lot more time to dedicate to this would like to tackle the RIA part, maybe it would be better if we swapped roles and gave the task to that developer, while I act as a helper (or code monkey) for whatever he/she may need.

I will just as happily pick something else to give my main attention to, for example the MEF encapsulation or the Closed Captions part.

Let me know what you think.

Coordinator
Dec 30, 2009 at 6:09 PM

Let's let this person get started on the RIA Services part, and if you are comfortable looking at MEF and the EventAggregator, that would be great. (Please see my response to your message in the other discussion.)

 

Thanks again.

 

-j

Dec 30, 2009 at 6:37 PM

No problem. I took myself off the RIA task so that it's free. (Though I couldn't put it back in the Ready for Dev lane.)

MEF is fine with me as well.

David

 

Jan 7, 2010 at 7:12 PM

Well, if a picture is worth a thousand words, then a video must be worth a thousand thoughts. Up until now the whole concept of the HVP was still a bit fuzzy, but the video helped me sharpen it into a much crisper view. What’s more, I think I now have a better idea of how to fit the Closed Captions mechanism into the project, so here is what I think:

At first, I looked at closed captions only at face value, that is, as a way to make the contents of the video accessible to people who have difficulty understanding the audio (whether inherently or because of their environment). They could also be useful to those who need or prefer a rolling transcript of the dialog in a video.

That perspective made me look at the captions as more of a function of the video player component itself. But now I think that is a very myopic view that severely restricts what could be done with the captioning mechanism.

The way I envision the captions now is more as a means of displaying not only an alternate view of the data (a text version of the audio), but also meta- or ancillary information. For example, they could also be used to display charts, labels, popups (maybe something similar is what Terence Tsang had in mind when he mentioned them?), and other visual gadgets at specific times during playback. They are different from links in the sense that they do not start a new video, open a new view, etc. They are also different from items that tell the player to go to a specific point in the video. The captions (and we should probably call them something else, something more encompassing or general) just display extra data.
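Just to illustrate (every name below is something I made up on the spot, nothing is decided), the payload for one of these “caption” events might look roughly like this:

    // Sketch only: a time-anchored piece of ancillary content.
    using System;

    public enum OverlayKind { Text, Chart, Label, Popup }

    public class OverlayCue
    {
        public TimeSpan Start { get; set; }     // when to show it during playback
        public TimeSpan Duration { get; set; }  // how long it stays visible
        public OverlayKind Kind { get; set; }   // plain caption text, chart, popup, ...
        public string Content { get; set; }     // the text, or a reference to the chart/graphic
    }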

With this new model, I see the captioning engine split into two components that could even be in separate modules. One component is tasked with understanding caption-description formats (SAMI, SMIL, etc.) and transforming that information into an event dictionary (or whatever we decide to call the list of events/markers for a video) that the event bus/manager can use. Conceivably, the component would work with pluggable parsers so that it can deal with any suitable or custom format (see the sketch below).
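A very rough sketch of the pluggable-parser idea (ICaptionParser is a hypothetical name, and it reuses the OverlayCue placeholder from above); each format-specific parser would turn a SAMI/SMIL/custom file into the shared cue list:

    // Sketch only: one implementation per caption-description format.
    using System.Collections.Generic;
    using System.IO;

    public interface ICaptionParser
    {
        // e.g. ".smi" or ".smil"; lets the engine pick a parser for a given file.
        bool CanParse(string formatOrExtension);

        // Produces the cue list ("event dictionary") handed to the event bus/manager.
        IEnumerable<OverlayCue> Parse(Stream captionFile);
    }

If we end up using MEF, each parser implementation could simply be exported with [Export(typeof(ICaptionParser))] and discovered at runtime, which is what I mean by “pluggable.”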

The other component is a visualizer, which actually renders the “caption” information. This component subscribes to the event bus and knows how to handle caption events. The normal, vanilla one would just be an overlay on the video component, just like TV captions. But there is no reason why the captions have to be on top of the video; they could be displayed in another frame (if I understand the frame/viewer concept correctly). This would even allow multiple viewers specialized for different tasks, such as traditional captions, tags (is that what the logo of a TV station, for example, is called?), graphics, etc.
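And the visualizer side might subscribe along these lines, assuming the event bus ends up being Prism’s EventAggregator (CaptionCueEvent and OverlayVisualizer are, again, names I just made up; if we use a different bus the shape changes):

    // Sketch only, assuming Prism v2 for Silverlight.
    using Microsoft.Practices.Composite.Events;               // IEventAggregator
    using Microsoft.Practices.Composite.Presentation.Events;  // CompositePresentationEvent, ThreadOption

    // Published by the parsing/timing side whenever a cue becomes active.
    public class CaptionCueEvent : CompositePresentationEvent<OverlayCue> { }

    public class OverlayVisualizer
    {
        public OverlayVisualizer(IEventAggregator eventAggregator)
        {
            // Subscribe on the UI thread, since handling a cue updates visuals.
            eventAggregator.GetEvent<CaptionCueEvent>()
                           .Subscribe(OnCue, ThreadOption.UIThread);
        }

        private void OnCue(OverlayCue cue)
        {
            // Render cue.Content as an overlay on the video, or hand it to
            // whichever frame/viewer is specialized for cue.Kind.
        }
    }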

The two components would work in pairs; that is, the visualizer must be able to render the data produced by the parser. Oh, and there must also be a visualizer capable of processing captions embedded in the video as markers.

So that is the general gist of the way I see Closed Captions now. Sorry for the rambling, but I am sort of brainstorming as I type this. I’d love some feedback on this: suggestions, ideas, criticisms, anything. Thanks.

David Mora