00:09:29 Marlies Gillis: https://drive.google.com/drive/folders/1v2hXcOYmBGDsneqUXAKO0EINGfmObrE7?usp=sharing
00:10:38 giorgia cantisani: https://drive.google.com/drive/folders/1v2hXcOYmBGDsneqUXAKO0EINGfmObrE7?usp=sharing
00:10:39 CNSP: https://drive.google.com/drive/folders/1v2hXcOYmBGDsneqUXAKO0EINGfmObrE7?usp=sharing
00:11:36 giorgia cantisani: https://drive.google.com/drive/folders/1v2hXcOYmBGDsneqUXAKO0EINGfmObrE7?usp=sharing
00:13:07 giorgia cantisani: Please raise your hand if you have any questions! The tutorial will be interactive, so feel free to ask questions anytime.
00:13:23 Brian Man: sorry how do I open with Google Colab from Google Drive?
00:13:36 Chelsea Blankenship: Reacted to "sorry how do I open ..." with 👍
00:13:41 Andre Palacios Duran: how do I add it to my Drive?
00:13:45 Ellie Ambridge: Reacted to "sorry how do I open ..." with 👍
00:14:12 Chelsea Blankenship: Replying to "sorry how do I open ..." I have the same question as well
00:15:04 giorgia cantisani: you can click on the name of the folder, click "Organize" and then "Add shortcut"
00:16:41 Andre Palacios Duran: Reacted to "you can click on the..." with 👍
00:16:54 CNSP: Replying to "sorry how do I open ..." I just double-clicked the code
00:16:55 Ellie Ambridge: Replying to "sorry how do I open ..." I think you need to go to "Open with", then "Connect more apps" and download "Google Colaboratory"
00:17:03 Ellie Ambridge: Replying to "sorry how do I open ..." it only takes a few seconds
00:17:41 giorgia cantisani: Replying to "sorry how do I open ..." Just double-click on the file. If you have a Mac, then you should install the plugin
00:18:23 giorgia cantisani: Is everyone on board with opening the Colab?
00:19:04 Brian Man: Replying to "sorry how do I open ..." oh I see. I was on the Edge browser on this PC; I just had to switch over to Chrome
00:19:16 Andre Palacios Duran: yes, but now I'm not sure where the repository she mentioned can be found
00:19:17 Brian Man: Replying to "sorry how do I open ..." because I couldn't find Colab from Edge
00:19:38 giorgia cantisani: Replying to "sorry how do I open ..." yes, Chrome should work just fine
00:20:47 giorgia cantisani: you should now find the folder in your Drive
00:21:04 giorgia cantisani: if you don't find it, another way would be:
00:21:29 giorgia cantisani: (1) Make a folder in your own Google Drive called "myOwnCopy_CNSP_Tutorials" (2) Go to the view-only "CNSP_Tutorial_Track_1" folder: CNSP_workshop drop-down menu -> Download (3) On your local machine, unzip the folder (4) In your own Google Drive folder "myOwnCopy_CNSP_Tutorial_Track_1": drag in the unzipped folder that contains the data and the Colab notebook file
00:23:30 Marlies Gillis: CNSP_Tutorial_Track_1
00:26:34 Brian Man: still taking time to install eelbrain
00:28:17 Brian Man: FileNotFoundError: [Errno 2] No such file or directory: '/mnt/drive/MyDrive/CNSP_workshop/data/stimuli/audiobook_1_64Hz_binned_spectrogram.pickle'
00:28:39 Marlies Gillis: CNSP_Tutorial_Track_1
00:31:04 Brian Man: I added it to the directory and tried to run, but it opened a new window for me
00:31:54 Brian Man: yes
00:32:05 danna pinto: it didn't work for me
00:32:38 Brian Man: will try now
00:32:51 giorgia cantisani: DATA_ROOT should be exactly the folder in your Drive where you saved the files
00:33:15 Maya מיה مايا: Hey. Thanks for the materials (and the great talk from earlier!) I have a theoretical Q about acoustic onsets: is there a recommended paper investigating the relation of acoustic onsets to actual linguistic elements?
00:33:34 giorgia cantisani: By default it should be "/mnt/drive/MyDrive/CNSP_Tutorial_Track_1"
00:33:44 Ofek Ben Abu: can you send the answers please?
we need those variables to move on
00:34:10 CNSP: I'll try to paste those as I get them
00:34:12 CNSP: acoustic_onsets = spectrogram.diff('time').clip(0)
00:34:12 giorgia cantisani: if you put it in a specific folder in your Drive, then: "/mnt/drive/MyDrive/name_of_the_subfolder/CNSP_Tutorial_Track_1"
00:34:19 Ofek Ben Abu: Reacted to "I'll try to paste th..." with 👍
00:34:26 CNSP: p = eelbrain.plot.Array(acoustic_onsets.sub(time=(97, 100)))
00:35:40 Ofek Ben Abu: Reacted to "p = eelbrain.plot.Ar..." with 🙏
00:36:31 giorgia cantisani: Replying to "Hey. Thanks for the ..." I'll ask this asap. But if you raise your hand and ask it yourself, I think Marlies would be happy to have some interaction :)
00:39:09 CNSP: Replying to "Hey. Thanks for the ..." Definitely get some input from Marlies, but from my experience, I've only seen onsets used as an acoustic feature to baseline linguistic features. They're definitely temporally correlated, which is why we have to use the baseline acoustic features.
00:41:51 Maya מיה مايا: Reacted to "I'll ask this asap. ..." with 🙏
00:41:55 Maya מיה مايا: Reacted to "Definitely get some ..." with 🙏
00:45:40 Brian Man: yes, it is fine now. Thank you :)
00:45:46 Brian Man: sorry for the trouble
00:45:55 giorgia cantisani: Reacted to "sorry for the troubl..." with 👍
00:47:10 giorgia cantisani: Replying to "sorry for the troubl..." no worries :)
00:51:19 CNSP: Here was my solution if anyone wants to copy:
00:51:20 CNSP:
phoneme_onset_times = [interval.xmin for interval in grid['MAU'] if interval.text not in ['', '\n\n', '', '']]
time = eelbrain.UTS.from_int(0, np.ceil(grid.xmax * fs), fs)
stimulus_phonemeOnsets = eelbrain.NDVar(np.zeros(len(time)), time, name='phoneme onsets')
stimulus_phonemeOnsets[phoneme_onset_times] = 1
p = eelbrain.plot.UTS([[spectrogram.mean('frequency', name='avg_spectrogram').sub(time=(97, 100)), stimulus_wordOnsets.sub(time=(97, 100)), stimulus_phonemeOnsets.sub(time=(97, 100))]])
00:51:49 CNSP: I don't know what I'm doing in Python, so I provide no warranty
01:08:22 CNSP:
phoneme_surprisal = eelbrain.load.unpickle(os.path.join(STIMULI_DIR, 'audiobook_1_64Hz_phoneme_surprisal_subtlex.pickle'))
word_surprisal = eelbrain.load.unpickle(os.path.join(STIMULI_DIR, 'audiobook_1_64Hz_word_surprisal_ngram.pickle'))
word_frequency = eelbrain.load.unpickle(os.path.join(STIMULI_DIR, 'audiobook_1_64Hz_word_frequency_ngram.pickle'))
p = eelbrain.plot.UTS(phoneme_surprisal.sub(time=(97, 100)))
01:08:40 CNSP:
p = eelbrain.plot.UTS([[spectrogram.mean('frequency', name='avg_spectrogram').sub(time=(97, 100)), phoneme_surprisal.sub(time=(97, 100)), stimulus_phonemeOnsets.sub(time=(97, 100)) + 1, stimulus_wordOnsets.sub(time=(97, 100)) + 2, word_surprisal.sub(time=(97, 100)) + 3, word_frequency.sub(time=(97, 100)) + 4]])
01:08:50 CNSP: oooof formatting.
01:13:46 Brian Man: Reacted to "oooof formatting." with 😂
01:20:08 Maya מיה مايا: Hi, my apologies, I have to disconnect early unexpectedly. Thanks again!
01:22:10 CNSP: Reacted to "Hi, my apologies, I ..." with 👋
01:22:17 CNSP:
results = []
for basis in [0, 0.05, 0.1, 0.15]:
    r_b = eelbrain.boosting(eeg.sub(time=(0, 60)), spectrogram.sub(time=(0, 60)), tstart=0, tstop=0.5, partitions=5, test=0, basis=basis)
    results.append(r_b)
01:22:37 Brian Man: I am a bit lost after the butterfly plot. What exactly does the basis kernel refer to?
01:23:25 CNSP: The shape of the "bell curve"-looking thing.
01:23:44 Andre Palacios Duran: so is the "kernel" simply an impulse in this case?
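[Editor's note] For anyone puzzling over the `acoustic_onsets = spectrogram.diff('time').clip(0)` line pasted earlier: conceptually it is a half-wave-rectified temporal derivative of the spectrogram, so energy increases count as onsets and decays are zeroed out. A minimal NumPy sketch of the same idea on a toy array (the numbers are made up for illustration; this is not the tutorial data):

```python
import numpy as np

# Toy "spectrogram": 3 frequency bins x 6 time samples (made-up numbers).
spec = np.array([
    [0.0, 0.2, 0.8, 0.7, 0.1, 0.1],
    [0.0, 0.1, 0.9, 0.9, 0.2, 0.0],
    [0.0, 0.3, 0.6, 0.5, 0.4, 0.4],
])

# Difference along the time axis: positive where energy increases.
delta = np.diff(spec, axis=1)

# Half-wave rectification (the .clip(0)): keep increases, zero out decays.
onsets = np.clip(delta, 0, None)

print(onsets.shape)  # one sample shorter along time: (3, 5)
```

Summing such onsets over frequency bins gives a single broadband onset signal like the one discussed in the onset question above.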
01:24:06 CNSP: It's more like the shape applied to an impulse
01:24:19 CNSP:
results = []
for basis in [0, 0.05, 0.1, 0.15]:
    r_b = eelbrain.boosting(eeg.sub(time=(0, 60)), spectrogram.sub(time=(0, 60)), tstart=0, tstop=0.5, partitions=5, test=0, basis=basis)
    results.append(r_b)
01:25:49 Andre Palacios Duran: so the "basis" parameter specifies the width of a Hamming window, which is the kernel, correct?
01:26:04 Brian Man: Replying to "so the "basis" param..." that's how I understand it
01:26:13 CNSP: Replying to "so the "basis" param..." yep
01:26:57 Brian Man: for what reason would you want the TRF to be smoother?
01:27:02 Brian Man: or not, in any case
01:28:40 Brian Man: I see. Yes, that's very clear, thank you!
01:36:41 CNSP: r_2 = eelbrain.boosting(eeg.sub(time=(0, 180)), [spectrogram.sub(time=(0, 180)), stimulus_phonemeOnsets.sub(time=(0, 180))], tstart=0, tstop=0.5, basis=0.05, partitions=5, test=1, selective_stopping=1)
01:37:16 Brian Man: will there be answers to the exercises in the end?
01:41:30 giorgia cantisani: Replying to "will there be answer..." We can ask Marlies!
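[Editor's note] To illustrate the basis discussion above: boosting learns a TRF as a sum of impulses, and (as agreed in the chat) `basis` sets the width in seconds of a Hamming window that each impulse is smeared with, which smooths the TRF. A rough NumPy sketch of that idea (a conceptual illustration, not eelbrain's actual implementation; the sampling rate, kernel length, and impulse position are arbitrary):

```python
import numpy as np

fs = 64                            # sampling rate used in the tutorial data
basis = 0.05                       # window width in seconds
width = int(round(basis * fs))     # width in samples (3 here)

kernel = np.zeros(32)
kernel[10] = 1.0                   # a single "learned" impulse

window = np.hamming(width)
window /= window.sum()             # normalize so the impulse's mass is kept

# Each impulse becomes a small bump; overlapping bumps give a smooth TRF.
smooth = np.convolve(kernel, window, mode='same')
print(smooth[9:12])                # the impulse spread over its neighbors
```

With `basis=0`, the impulses stay as-is and the TRF looks spiky; wider windows trade temporal precision for smoothness.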
01:51:48 CNSP:
rows_linguistic_tracking = []
for subject in np.unique(ds_prediction_accuracy['subject']):
    complete = ds_prediction_accuracy[np.logical_and(ds_prediction_accuracy['subject'] == subject, ds_prediction_accuracy['model'] == 'complete')]
    assert complete.shape[0] == 1
    baseline = ds_prediction_accuracy[np.logical_and(ds_prediction_accuracy['subject'] == subject, ds_prediction_accuracy['model'] == 'baseline')]
    assert baseline.shape[0] == 1
    added_value = complete['prediction_accuracy'] - baseline['prediction_accuracy']
    rows_linguistic_tracking.append([subject, added_value])
01:53:48 CNSP: Didn't get the plot function for the next plot, but here's the other line:
01:53:48 CNSP: ds_linguistic_tracking = eelbrain.Dataset.from_caselist(['subject', 'added_value'], rows_linguistic_tracking, random='subject')
01:57:59 CNSP:
for feature, t in zip(['cohort_entropy_subtlex', 'word_surprisal_ngram'], [0.3, 0.3]):
    p = eelbrain.plot.TopoButterfly('trf', ds=ds_trf[ds_trf['feature'] == feature], t=t, title=feature)
02:00:22 Brian Man: I probably will have quite a bit in the future, but I need to really sit down and think it through first. Thank you very much for the great tutorial!
02:05:10 Chelsea Blankenship: Thank you so much for the tutorial!
02:05:32 Aaron Gibbings: Fantastic workshop, Marlies!!! So much info! I'll be sure to reach out when I start using these analyses! Thank you!!
02:06:55 CNSP: We'll have to end after this question so that we can start the final session.
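[Editor's note] The per-subject "added value" loop pasted at 01:51:48 can be read as: for each subject, subtract the baseline model's prediction accuracy from the complete model's, keeping the difference as that subject's linguistic-tracking score. A plain-Python sketch of the same logic with made-up numbers (no eelbrain Dataset involved; subject names and values are invented for illustration):

```python
# Made-up example rows, mimicking one prediction-accuracy entry per
# subject/model pair as in the tutorial's ds_prediction_accuracy.
rows = [
    {'subject': 'S1', 'model': 'complete', 'prediction_accuracy': 0.12},
    {'subject': 'S1', 'model': 'baseline', 'prediction_accuracy': 0.10},
    {'subject': 'S2', 'model': 'complete', 'prediction_accuracy': 0.09},
    {'subject': 'S2', 'model': 'baseline', 'prediction_accuracy': 0.08},
]

added_value = {}
for subject in {r['subject'] for r in rows}:
    complete = [r for r in rows if r['subject'] == subject and r['model'] == 'complete']
    baseline = [r for r in rows if r['subject'] == subject and r['model'] == 'baseline']
    assert len(complete) == 1 and len(baseline) == 1  # one row per subject/model
    # Complete minus baseline: accuracy gained by the linguistic features.
    added_value[subject] = complete[0]['prediction_accuracy'] - baseline[0]['prediction_accuracy']

print(added_value)
```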