If you are wondering where the data on this site comes from, please visit https://api.github.com/users/t-mullen/events. GitMemory does not store any data; it only uses NGINX to cache data for a period of time. The idea behind GitMemory is simply to give users a better reading experience.
Thomas Mullen (t-mullen), Toronto ON. Video streaming, P2P and collaborative editors.

feross/simple-peer 5717

📡 Simple WebRTC video, voice, and data channels

feross/drag-drop 456

HTML5 drag & drop for humans

t-mullen/hls-server 160

Middleware for serving HTTP Live Streaming (HLS) compatible media streams.

multihack/multihack-web 88

Realtime collaboration for programmers. (Web Version)

t-mullen/hyperhost 46

P2P Node Servers in the Browser

multihack/multihack-brackets 24

Realtime collaboration for programmers. (Brackets Extension)

multihack/multihack-server 19

Realtime collaboration for programmers. (Server)

t-mullen/fluent-ffmpeg-multistream 13

Multiple stream inputs/outputs in fluent-ffmpeg.

multihack/multihack-vscode 12

Realtime collaboration for programmers. (VSCode Extension)

t-mullen/encodji 4

Serialize JavaScript objects into user-friendly emoji strings.

PR opened versatica/mediasoup-client

Add missing "active" property to RtpEncodingParameters type

Description

Adds the missing "active" property to RtpEncodingParameters, which is otherwise identical to RTCRtpEncodingParameters.

Use case

Calling Producer::setRtpEncodingParameters to activate/deactivate encodings.
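With the added flag, activating or deactivating an encoding becomes a plain property update. A minimal sketch (the interface below is a local stand-in for mediasoup-client's RtpEncodingParameters type, and keepTopLayerActive is a hypothetical helper, not part of the library):

```typescript
// Minimal local stand-in for the RtpEncodingParameters shape this PR
// extends (the real type lives in mediasoup-client); `active` is the
// field the PR adds.
interface RtpEncodingParameters {
  rid?: string;
  maxBitrate?: number;
  active?: boolean;
}

// Sketch: deactivate every encoding except the highest spatial layer,
// the kind of toggle Producer::setRtpEncodingParameters enables.
function keepTopLayerActive(
  encodings: RtpEncodingParameters[]
): RtpEncodingParameters[] {
  return encodings.map((enc, idx) => ({
    ...enc,
    active: idx === encodings.length - 1,
  }));
}
```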

+3 -1

0 comments

3 changed files

pr created time in 2 months

create branch hopin-team/mediasoup-client

branch : feat/add-active

created branch time in 2 months

Pull request review comment versatica/mediasoup-client

Allow setting RtpEncodingParameters for individual encodings

 export class Chrome67 extends HandlerInterface
 		const parameters = rtpSender.getParameters();
-		parameters.encodings.forEach((encoding: RTCRtpEncodingParameters, idx: number) =>
+		if (Array.isArray(params))
 		{
-			parameters.encodings[idx] = { ...encoding, ...params };
-		});
+			parameters.encodings.forEach((encoding: RTCRtpEncodingParameters, idx: number) =>
+			{

This is the way property assignment was done before. It seems to be correct:

const oldObj = { x: 1 }
const newObj = { x: 2 }
{ ...oldObj, ...newObj } // { x: 2 }
t-mullen

comment created time in 2 months

PullRequestReviewEvent

push event hopin-team/mediasoup-client

t-mullen

commit sha 8bef6e79e5bca00743b1f947ffd2ec3d12286fc0

fix: clone params array before reversing

view details

push time in 2 months

PR opened versatica/mediasoup-client

Allow setting RtpEncodingParameters for individual encodings

Description

Changes Producer::setRtpEncodingParameters to also accept an array of RTCRtpEncodingParameters, for setting different parameters for each encoding. Passing a single RTCRtpEncodingParameters object will work as before.

Use cases

I have a few uses for this - just some examples:

  1. Disabling lower-indexed encodings (similar to setMaxSpatialLayer but reversed).
  2. Modifying scaleResolutionDownBy depending on the current source resolution of the track.
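The merge behaviour described above can be sketched as follows. This is a simplified model, not the actual mediasoup-client handler code; EncodingParams and mergeEncodingParams are illustrative names:

```typescript
// Illustrative shape for per-encoding parameters.
interface EncodingParams {
  maxBitrate?: number;
  scaleResolutionDownBy?: number;
  active?: boolean;
}

// Sketch of the proposed API: a single object is applied to every
// encoding (the old behaviour), while an array is applied per index.
function mergeEncodingParams(
  encodings: EncodingParams[],
  params: EncodingParams | EncodingParams[]
): EncodingParams[] {
  if (Array.isArray(params)) {
    // per-encoding: apply params[idx] to encoding idx
    // (a missing entry leaves that encoding unchanged)
    return encodings.map((enc, idx) => ({ ...enc, ...(params[idx] ?? {}) }));
  }
  // single object: applied to every encoding, as before
  return encodings.map((enc) => ({ ...enc, ...params }));
}
```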
+182 -52

0 comments

25 changed files

pr created time in 2 months

create branch hopin-team/mediasoup-client

branch : feat/setRtpEncodingParam-2

created branch time in 2 months

pull request comment versatica/mediasoup-client

Allow setting RtpEncodingParameters for individual encodings

After some discussion elsewhere, going to reopen with a slightly different API.

t-mullen

comment created time in 2 months

PR closed versatica/mediasoup-client

Allow setting RtpEncodingParameters for individual encodings

Description

Adds an optional second parameter to Producer::setRtpEncodingParameters that allows applications to specify the index of the encoding to modify. Not providing this second parameter will modify all encodings just like before.

Use cases

I have a few uses for this - just some examples:

  1. Disabling lower-indexed encodings (similar to setMaxSpatialLayer but reversed).
  2. Modifying scaleResolutionDownBy depending on the current source resolution of the track.
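The index-based variant this closed PR proposed can be sketched like this (illustrative names and a local type, not the actual implementation):

```typescript
// Illustrative shape for per-encoding parameters.
interface EncodingOpts {
  maxBitrate?: number;
  active?: boolean;
}

// Sketch of the earlier API: an optional index selects one encoding to
// modify; omitting it modifies all encodings, just like before.
function applyAtIndex(
  encodings: EncodingOpts[],
  params: EncodingOpts,
  index?: number
): EncodingOpts[] {
  return encodings.map((enc, i) =>
    index === undefined || i === index ? { ...enc, ...params } : enc
  );
}
```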
+181 -102

1 comment

50 changed files

t-mullen

pr closed time in 2 months

push event hopin-team/mediasoup-client

t-mullen

commit sha 15f70de0e8adc6991e3d7406d0523339b6513a68

fix: reverse encoding index for firefox

view details

push time in 2 months

PR opened versatica/mediasoup-client

Allow setting RtpEncodingParameters for individual encodings

Description

Adds an optional second parameter to Producer::setRtpEncodingParameters that allows applications to specify the index of the encoding to modify. Not providing this second parameter will modify all encodings just like before.

Use cases

I have a few uses for this - just some examples:

  1. Disabling lower-indexed encodings (similar to setMaxSpatialLayer but reversed).
  2. Modifying scaleResolutionDownBy depending on the current source resolution of the track.
+177 -102

0 comments

50 changed files

pr created time in 2 months

create branch hopin-team/mediasoup-client

branch : feat/setRtpEncodingParam-individual

created branch time in 2 months

push event hopin-team/mediasoup-client

Iñaki Baz Castillo

commit sha 58bef3bbe695f80f36d930ad76878089437bebf0

Update deps

view details

Iñaki Baz Castillo

commit sha 1dfa30ade9b2f7a1fb80964cd6135d98fd049cd5

Remove deprecated priority field from DataChannel parameters (fixes #167)

view details

Iñaki Baz Castillo

commit sha a813c39bde5c9493436d2f6c7af7fa310a96decd

3.6.31

view details

Iñaki Baz Castillo

commit sha 02fe189d591aa71e25f4879df9eeacd45df5fd73

3.6.32

view details

José Luis Millán

commit sha 1805fc1362243dcd3695bca2caf0ee812bf800cb

package.json: make 'fake-mediastreamtrack' a dependency (#170)

FakeHandler directly depends on it. Right now, a FakeHandler usage in a project dependent on mediasoup-client will fail because 'fake-mediastreamtrack' is unreachable.

view details

Iñaki Baz Castillo

commit sha 0ccca08e04bac0bb116987dc6f2561c9f9bda8bc

3.6.33

view details

José Luis Millán

commit sha 1783d36a71dd26e24397b5fb240ac50465aa8b2a

Plan B fix (#171)

MediaSection: fix planBReceive so a media section can contain different codecs for the same media type.

view details

José Luis Millán

commit sha 05af1fedc5aafd300818cb6cfe6e105c9784cfdf

3.6.34

view details

Iñaki Baz Castillo

commit sha ad4af60f61ee3ccf2e2c532f89620f4f505bbf2c

Update deps

view details

Iñaki Baz Castillo

commit sha a130df06673ca27a995741fb7481036ef0c8110b

Expose debug dependency

view details

Iñaki Baz Castillo

commit sha ec667e94d5d4fd317c0aed27b0ac8180ae9ee5af

3.6.35

view details

José Luis Millán

commit sha 01c626afbbf2068d86df466182c6eaf654058960

MediaSection: fix planBReceive, cont. (#172)

* Don't remove codec description on stop sending.
* Don't add the same payload type on sending.
* Don't duplicate codec descriptions upon receiving new streams.

view details

José Luis Millán

commit sha 1cf7e982f9986c663c47b6ec1acd6cad15c0533d

3.6.36

view details

José Luis Millán

commit sha f22a9d05e987e83ff62c7f022b508d0ec295612d

lint

view details

José Luis Millán

commit sha 4aa873afa6be1469d11d34f1489a699c838ae081

npm-scripts: add 'release' script

view details

push time in 2 months

issue comment t-mullen/video-stream-merger

Support for mobile (react-native)

React Native doesn't provide native Canvas or WebAudio APIs, so more work is needed here.

DuyPhanQuang

comment created time in 2 months

issue comment t-mullen/video-stream-merger

Safari 14 support.

@stijndeschuymer Please close this issue if it is solved in 4.0.1

stijndeschuymer

comment created time in 2 months

issue comment t-mullen/video-stream-merger

When trying to addStream() yPosition not working

Fixed in 4.0.1

Iulian33

comment created time in 2 months

issue closed t-mullen/video-stream-merger

When trying to addStream() yPosition not working

const merger = new VideoStreamMerger();
merger.addStream(mediaStream, {
  x: 200,
  y: 410,
  width: 400,
  height: 225,
  mute: true
});

The y: 410 option is not applied in version 4.0.0.

closed time in 2 months

Iulian33

push event t-mullen/video-stream-merger

Harold Thetiot

commit sha 49abea9ad7acc780dbc58553bb72e5bd30e58493

fix stream.y typo #73 and prepare 4.0.1

view details

Harold Thetiot

commit sha d7e6b9140a0908cf23cef0f8efaf8fda49a1155e

build 4.0.1

view details

Thomas Mullen

commit sha 25397bf0638c813065e7e46495b0f0e80f0e94a1

Merge pull request #74 from hthetiot/gh-pages

Fix addStream() y position not working

view details

push time in 2 months

PullRequestReviewEvent

issue commentt-mullen/video-stream-merger

Support Node.js

Node.js doesn't support the Canvas or WebAudio APIs - there's more work to do on this issue.

shlok-ps

comment created time in 2 months

pull request comment t-mullen/video-stream-merger

Migrate to TypeScript, Fix Safari audio support and NodeJS Support

Published as 4.0.0. Thanks again!

hthetiot

comment created time in 2 months

push event t-mullen/video-stream-merger

Harold Thetiot

commit sha f27574bca5529ad1346d8800f0e4279a177a32de

migrate to TypeScript

view details

Harold Thetiot

commit sha 4fe0e82d97412c0be74cfbeb54fa18a1b2fd9f12

make demo use webrtc-adapter

view details

Harold Thetiot

commit sha d49a252dbf541c6d60b63526af65c1701f44f206

update dist build

view details

Harold Thetiot

commit sha ffd63e18eb1873828a6e324752667f6bd9de37fd

remove .babelrc

view details

Harold Thetiot

commit sha 725faa824b9cbe60e6c8e7a7c5fe72be850c7017

make main demo use webrtc-adapter

view details

Harold Thetiot

commit sha 609d8540563ffb678fd6c9667839d2b9114b0251

update npm run start to use root as folder

view details

Harold Thetiot

commit sha 3da4e1c1a496124f4b1fcaebb40172e3690059f2

update npm run start to use root as folder

view details

Harold Thetiot

commit sha 1f22a200c965791ecda79d7dd8a368a03cb9de97

restore original npm commands

view details

Harold Thetiot

commit sha a8a0c9a00f6356ca5ff5d89df3958db6305f20d0

fix lint

view details

Harold Thetiot

commit sha 8def7421206bbdf73ec3b067e9a1c818d16125e6

fix node support

view details

Harold Thetiot

commit sha 35237acf72dd3936f89ecf6ed4ec2880559bb624

remove legacy /index.d.ts in favor of dist/index.d.ts

view details

Harold Thetiot

commit sha ced58948ebcc3323c0ea557c6d1dcea38b65e22a

fix safari support by moving canvas creation to start so it's triggered by click/touch

view details

Harold Thetiot

commit sha 5c1592ea4f498a5286f2ce0fb39724f6ff244387

update dist

view details

Harold Thetiot

commit sha c068b3440e0b258cd78eba67280ae6539c5aa2ac

fix review: do not allocate an object in the draw loop

view details

Harold Thetiot

commit sha 23e4fd28252296bf2762402cc7e695a5f879e1b4

add viewport, missing title, and video playsinline and controls to demos

view details

Harold Thetiot

commit sha c4212505a74989a10479ff732a2aeb69ea98bf8d

update dist build

view details

Harold Thetiot

commit sha 92f02518670126bc5ac4b302eb5f25dafc6cd8aa

handle stream without audio

view details

Harold Thetiot

commit sha bc16c75ff783fa0e473911e30a0c4b4f4c194cf4

minor demos fixes, add p2p screen demo

view details

Harold Thetiot

commit sha 8207b01e4ea0e664ac8b84075ad6a3cbf347e66e

update dist build

view details

Harold Thetiot

commit sha 2f1bc34c89140773bb11b04859c73f3bdb0cd492

properly merge original index.d.ts with documentation

view details

push time in 2 months

PR merged t-mullen/video-stream-merger

Migrate to TypeScript, Fix Safari audio support and NodeJS Support

Changelog

  1. Migrate to TypeScript
  2. Fix Safari audio by implementing a createConstantSource fallback
  3. Keep video ratio when rendering
  4. Fix NodeJS support
  5. Move canvas creation into start()

PR Demo URL:

  • https://hthetiot.github.io/video-stream-merger/

Tested:

  • Chrome: OK
  • macOS Safari Version 14.1 (16611.1.21.161.3): OK (fixed compared to the original version)
  • iOS 14.4, 14.5, 15.6: OK (fixed compared to the original version, but still produces a black stream under most conditions due to an iOS bug)
  • Firefox 89.0.2: FAIL (same as the original version)

iOS issue "Video Element cannot playback local Canvas.captureStream on iOS", fixed in iOS 15: https://bugs.webkit.org/show_bug.cgi?id=181663
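The createConstantSource fallback mentioned in the changelog can be sketched roughly as follows, adapted from the reviewed source. AudioCtxLike is a minimal local stand-in for the Web Audio AudioContext, so the sketch stays self-contained:

```typescript
// Minimal stand-in for the parts of AudioContext this sketch touches.
type AudioCtxLike = {
  createConstantSource?: () => any;
  createBufferSource: () => any;
  createBuffer: (channels: number, length: number, sampleRate: number) => any;
  sampleRate: number;
};

// Safari fallback sketch: when AudioContext.createConstantSource is
// missing (older Safari), a one-sample looping AudioBufferSourceNode
// filled with a constant value stands in for a ConstantSourceNode.
function createConstantSource(ctx: AudioCtxLike) {
  if (ctx.createConstantSource) {
    return ctx.createConstantSource(); // native path
  }
  // fallback: looping buffer holding a single constant sample
  const node = ctx.createBufferSource();
  const buffer = ctx.createBuffer(1, 1, ctx.sampleRate);
  buffer.getChannelData(0)[0] = 1;
  node.buffer = buffer;
  node.loop = true;
  return node;
}
```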

+5246 -2870

5 comments

18 changed files

hthetiot

pr closed time in 2 months

Pull request review comment t-mullen/video-stream-merger

Migrate to TypeScript, Fix Safari audio support and NodeJS Support

[diff context omitted: the full TypeScript rewrite of the VideoStreamMerger class; the comment below refers to the default sizing in _drawVideo]

+    const size = {
+        height: stream.height || element.videoHeight || canvasSize.height,

I agree aspect ratio should be kept, but not specifying a width and height should still fullscreen the stream.

Revert this for now - will address in another PR.

hthetiot

comment created time in 3 months

PullRequestReviewEvent

Pull request review comment t-mullen/video-stream-merger

Migrate to TypeScript, Fix Safari audio support and NodeJS Support

[diff context omitted: the same TypeScript rewrite of the VideoStreamMerger class; the review diff is truncated here]
this._canvas?.captureStream(this.fps) || null;++    // Remove "dead" audio track+    const deadTrack = this.result?.getAudioTracks()[0];+    if (deadTrack) {this.result?.removeTrack(deadTrack);}++    // Add audio+    const audioTracks = this._audioDestination?.stream.getAudioTracks();+    if (audioTracks && audioTracks.length) {+      this.result?.addTrack(audioTracks[0]);+    }+  }++  _updateAudioDelay(delayInMs: number) {+    if (this._videoSyncDelayNode && this._audioCtx) {+      this._videoSyncDelayNode.delayTime.setValueAtTime(delayInMs / 1000, this._audioCtx.currentTime);+    }+  }++  _draw() {+    if (!this.started) {return;}++    this._frameCount++;++    // update video processing delay every 60 frames+    let t0  = 0;+    if (this._frameCount % 60 === 0) {+      t0 = performance.now();+    }++    let awaiting = this._streams.length;+    const done = () => {+      awaiting--;+      if (awaiting <= 0) {+        if (this._frameCount % 60 === 0) {+          const t1 = performance.now();+          this._updateAudioDelay(t1 - t0);+        }+        this._requestAnimationFrame(this._draw.bind(this));+      }+    };++    if (this.clearRect) {+      this._ctx?.clearRect(0, 0, this.width, this.height);+    }+    this._streams.forEach((stream) => {+      if (stream.draw) { // custom frame transform+        stream.draw(this._ctx, stream.element, done);+      } else if (!stream.isData && stream.hasVideo) {+        this._drawVideo(stream.element, stream);+        done();+      } else {+        done();+      }+    });++    if (this._streams.length === 0) {+      done();+    }+  }++  _drawVideo(element: HTMLVideoElement, stream: any) {+++    // default draw function+    const canvasSize = { height: this.height, width: this.width};++    const position = {+      x: stream.x || 0,+      y: stream.y || 0+    };++    const size = {+        height: stream.height || element.videoHeight || canvasSize.height,+        width: stream.width || element.videoWidth || canvasSize.width+   
 };++    // TODO move to sreeam option to enable new behavior+    const keepRatio = false;

New behaviour is fine in a major version (I doubt many applications are using a warped aspect ratio anyway).

If you can remove this feature from this PR, I'll open another one to implement fixed aspect ratio. Don't want to block the rest of your fixes. 👍
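For reference, a fixed-aspect-ratio draw would essentially be "object-fit: contain" letterbox math: scale the source to fit the destination rectangle without distortion and center it. A minimal sketch under that assumption (the function name `fitContain` and the `keepRatio` option are hypothetical, not part of video-stream-merger's API):

```typescript
interface Rect { x: number; y: number; width: number; height: number; }

// Scale a srcW x srcH source to fit inside dst without distortion,
// centering it and leaving letterbox/pillarbox bars as needed.
function fitContain(srcW: number, srcH: number, dst: Rect): Rect {
  const scale = Math.min(dst.width / srcW, dst.height / srcH);
  const width = srcW * scale;
  const height = srcH * scale;
  return {
    x: dst.x + (dst.width - width) / 2,   // center horizontally
    y: dst.y + (dst.height - height) / 2, // center vertically
    width,
    height
  };
}
```

A `keepRatio: true` stream option could then pass `element.videoWidth`/`element.videoHeight` and the stream's target rectangle through `fitContain` before calling `ctx.drawImage`.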

hthetiot

comment created time in 3 months

PullRequestReviewEvent
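The `updateIndex`/`_sortStreams` logic in the reviewed diff relies on `Array.prototype.sort` being stable (guaranteed since ES2019), so streams with equal `index` keep their insertion order. A standalone sketch of that z-ordering (the `Layer` type and function names are illustrative, not the library's API):

```typescript
interface Layer { id: string; index: number; }

// Stable sort: layers with equal index keep their insertion order,
// mirroring _sortStreams() in the diff.
function sortLayers(layers: Layer[]): Layer[] {
  return [...layers].sort((a, b) => a.index - b.index);
}

// Mirrors the diff's updateIndex: a null/undefined index defaults to 0,
// then the list is re-sorted.
function updateIndex(layers: Layer[], id: string, index?: number | null): Layer[] {
  const idx = index == null ? 0 : index;
  for (const layer of layers) {
    if (layer.id === id) layer.index = idx;
  }
  return sortLayers(layers);
}
```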

issue comment t-mullen/video-stream-merger

Capture from Merger canvas appears blank when using getDisplayMedia

No, sorry - I wasn't able to find a workaround.

Yes, this should be reported to the Chromium team. Be sure to include your jsfiddle for reproduction.

nikz

comment created time in 3 months