I've attempted Zoom's screen sharing (specifically the 'share audio only' feature) and it isn't working for me. Or rather, it technically works, and the music does sound better, but the sync is way out of whack. Basically, the package of video and audio I create and hear is what I want my students to see and hear; otherwise it's a recipe for disaster! I first thought the sync problem might be because I was wearing AirPods, but it's still off when I use wired headphones.

I should also say that I do not mind latency. I don't expect my students to be dancing with me in real time. My only goal is to make sure that whatever my students receive is synced between (1) the video, (2) the instruction (my voice via USB mic), and (3) the music (Spotify, routed directly via Soundflower or similar). Right now the sync is slightly off, and with this style of dance (tap) I need it to be precise. Thanks to message-board postings here, I'm already pretty happy with the sound quality of both the mic and the music.

Additionally, if possible, I'd love to have the music pump only into my ears (AirPods would be an ideal monitor if Bluetooth syncing can be achieved), so there is no music in the room; that way the USB mic picks up only my voice and tap-shoe sounds, not music from a Bluetooth speaker or the like.

What I'm specifically attempting to do is take OBS's final mixed audio, including an intentional Sync Offset delay, and send that to Zoom for streaming to my students. But the Sync Offset feature doesn't appear to be working with my current settings: the music is hitting a smidge earlier than the video and voice (the video and the mic seem to be in sync with each other). The overall goal is to raise the quality of the music for my classes, as opposed to placing a Bluetooth speaker near my USB mic and having the music picked up acoustically.
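One practical way to find the right Sync Offset value is to record a sharp sound (a clap or tap) on both the mic channel and the music channel, then measure how many milliseconds one stream lags the other. This is a minimal, self-contained sketch of that idea using brute-force cross-correlation; the sample data and the toy sample rate are made up for illustration, and this is not part of OBS itself:

```python
# Estimate how many milliseconds one audio stream lags another via
# brute-force cross-correlation. Toy sketch with fabricated sample
# data: in practice you'd record the same clap on both channels and
# feed the two sample lists in here.

def best_lag(reference, delayed, max_lag):
    """Return the lag (in samples) at which `delayed` best matches `reference`."""
    def score(lag):
        # Dot product of the reference against the delayed stream shifted by `lag`.
        return sum(r * d for r, d in zip(reference, delayed[lag:]))
    return max(range(max_lag + 1), key=score)

SAMPLE_RATE = 1000  # samples per second (toy rate for the example)

# A short "click" pattern; the second stream carries the same click
# 120 samples later, simulating the out-of-sync channel.
click = [0.0] * 50 + [1.0, 0.8, 0.6, 0.4, 0.2] + [0.0] * 445
late = [0.0] * 120 + click[:-120]

lag = best_lag(click, late, max_lag=300)
print(f"offset: {lag / SAMPLE_RATE * 1000:.0f} ms")  # prints "offset: 120 ms"
```

The millisecond figure this produces is the kind of number you would then dial into OBS's Advanced Audio Properties as the Sync Offset for the early channel.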
I'm a dance teacher and am aiming to sync up my mic (a USB Snowball) with Spotify (running into OBS via Soundflower) and port all of that, along with the video, to Zoom in a synchronized fashion, with the best possible audio quality.

You could do that with Logic, Live, or any app with multiple input possibilities, even Renoise. I use an Aggregate Device to combine multiple soundcards, physical or virtual. So I would create an Aggregate Device containing the physical soundcard's inputs and the Soundflower (64ch) inputs, still have Pure Data send to Soundflower (64ch), and have Ableton Live receive from the Aggregate Device. Then you map the inputs you want to use to the tracks you want them to play in, and there you go. This way you can have individual sounds going into individual audio tracks, each run through its own effects, and so on.

So you would set Pure Data to 64ch, say outputs 1-16, then configure Ableton Live to listen to the 64ch device's inputs 1-16, and create either 8 or 16 audio tracks listening to 64ch-1 through 64ch-16. Pure Data can have multiple stereo outputs (and mono outputs, of course). Ableton Live can receive the multiple stereo outputs sent by Pure Data, so you can take live output from Pure Data into Ableton Live and process it there.
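The routing described above is really just a table mapping stereo pairs of the 64-channel device to Live's audio tracks. A small sketch that prints that table, assuming the 8-stereo-track layout (the track and device names are illustrative, not actual API identifiers):

```python
# Sketch of the routing table described above: Pure Data writes to
# Soundflower (64ch) outputs 1-16, and each Ableton Live audio track
# listens to one stereo pair of the Aggregate Device. Names are
# illustrative only; you'd set these by hand in Live's I/O section.

def stereo_pairs(first_channel, last_channel):
    """Yield (left, right) channel pairs, e.g. (1, 2), (3, 4), ..."""
    for left in range(first_channel, last_channel, 2):
        yield (left, left + 1)

routing = {
    f"Audio Track {track}": f"Soundflower64ch {left}/{right}"
    for track, (left, right) in enumerate(stereo_pairs(1, 16), start=1)
}

for track, source in routing.items():
    print(f"{track}  <-  {source}")
```

Running this lists 8 tracks, "Audio Track 1" fed from channels 1/2 up through "Audio Track 8" fed from 15/16; the 16-mono-track variant would simply map one channel per track instead.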