00:34:13 Stephanie Haro: How do we find dataCND if we don't have it?
00:37:22 Stephanie Haro: Great questions! 😄
00:40:07 Aaron Gibbings: Mac comment = Command + /; Mac uncomment = Command + T
00:41:27 Giovanni Di Liberto: Please feel free to interrupt if you have questions
00:42:37 Brian Man: I am unable to find pre_dataSub10.mat; I only have dataSub10.mat. Are they the same?
00:44:07 Brian Man: But it says they have different sampling frequencies
00:44:13 Brian Man: Error: EEG and STIM have different sampling frequency
00:44:42 Camila Zugarramurdi: Maybe you can share it here in the chat
00:49:08 Giovanni Di Liberto: Let me know in the chat if you have issues running the preprocessing
00:49:59 Brian Man: So we basically run the preprocessing section in the example1 forwardTRF script, and then go back to the tutorial script to run the section that he is running?
00:50:06 Giovanni Di Liberto: exactly
00:50:16 Brian Man: Reacted to "exactly" with 👍
00:53:01 Giovanni Di Liberto: Any questions? Let us know if you don't know what overfitting means
00:53:55 Zhen ZENG: Error using mTRFtrain: STIM and RESP arguments must have the same number of observations.
00:55:11 Giovanni Di Liberto: Could you tell me if you have preprocessed the data or changed anything in the analysis code?
00:55:38 Zhen ZENG: Data preprocessed, haven't changed the code
00:55:55 Camila Zugarramurdi: can you explain lambda a bit slower? Are you testing many lambdas for each subject? Trial? Model? What statistic do you use to select the best one?
00:56:03 Giovanni Di Liberto: Hmm, that would mean that the stimulus and the EEG don't have the same length
00:56:25 Giovanni Di Liberto: Could you have a look at the length of the first cell in stim and resp please?
00:57:41 Zhen ZENG: The preprocessed files have a sampling rate of 128 Hz but the stimuli were 64 Hz; these came from the code
00:57:56 Giovanni Di Liberto: Splendid. So that's the issue
00:58:10 Giovanni Di Liberto: You'll need to preprocess the data so that the downsampling frequency is 64 Hz
00:58:32 Giovanni Di Liberto: In any case, stim and resp must have the same sampling frequency
00:58:35 giorgia cantisani: Replying to "can you explain lamb...": feel free to unmute and interact!
00:58:39 Giovanni Di Liberto: There is a variable, downFs
00:58:43 Giovanni Di Liberto: for that
01:01:12 Zhen ZENG: Thanks
01:01:36 Camila Zugarramurdi: Thanks
01:01:52 Brian Man: Can I ask in general, how you would deal with stimuli that have different lengths? Because lots of speech material varies in length. Do you just cut all data such that all samples are within the active listening part?
01:02:33 Giovanni Di Liberto: Replying to "Can I ask in general...": Do you mean different lengths between trials, or stimuli that are shorter than the EEG signal?
01:02:33 CNSP: https://youtu.be/Q81RR3yKn30?si=sTiF74mfNlnO5GpG
01:02:53 Brian Man: Replying to "Can I ask in general...": Between trials
01:03:22 Giovanni Di Liberto: Replying to "Can I ask in general...": That's not a problem. The mTRF-Toolbox can deal with trials of different lengths
01:04:18 Brian Man: Replying to "Can I ask in general...": I see. Thanks!
01:04:35 Giovanni Di Liberto: Replying to "Can I ask in general...": So, the first trial could be 2 min, the second one 3 min, and so on. CND allows for that by default
01:04:39 Giovanni Di Liberto: Reacted to "I see. Thanks!" with 👍
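[Editor's sketch] To make the fixes discussed above concrete, here is a minimal MATLAB sketch assuming CND-style data where stim and resp are cell arrays with one trial per cell (time x features and time x channels, respectively). It resamples the EEG to the stimulus rate via the downFs variable Giovanni mentions, then selects lambda by cross-validation, which also answers Camila's question about the statistic (mean prediction correlation r across folds and channels). The variable eegFs, the lambda grid, and the time lags are assumptions in the spirit of the CNSP tutorial scripts, not their exact code.

    % Make stim and resp share the same sampling rate before mTRFtrain.
    downFs = 64;    % target rate; must match the stimulus sampling rate
    eegFs  = 128;   % rate of the preprocessed EEG in this example (assumed)
    for tr = 1:numel(resp)
        resp{tr} = resample(resp{tr}, downFs, eegFs);   % resample each channel
        len = min(size(stim{tr},1), size(resp{tr},1));  % guard against off-by-one
        stim{tr} = stim{tr}(1:len,:);
        resp{tr} = resp{tr}(1:len,:);
    end
    % Trials of different lengths are fine: each cell simply has its own length.

    % Lambda selection: one grid of lambdas evaluated by cross-validation,
    % the best value chosen by mean prediction correlation across folds and
    % channels, then used to train the final forward model (dir = 1).
    lambdas = 10.^(-2:2:8);
    cv = mTRFcrossval(stim, resp, downFs, 1, -100, 400, lambdas);
    [~, best] = max(squeeze(mean(mean(cv.r, 1), 3)));
    model = mTRFtrain(stim, resp, downFs, 1, -100, 400, lambdas(best));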
01:04:43 Marianne de Heer Kloots: Is this model only using the stimulus envelope or do you combine the different features in one model? I'm wondering how the comparison between continuous and discrete features works, and also discrete features of different time ranges (like word vs. phonetic features)
01:05:16 Anton Rogachev: Can you tell me what is the optimal number of trials for mTRF, or does it not matter? If I have 2-3 long trials (5-7 minutes), will cross-validation and training work correctly?
01:05:27 Anton Rogachev: Reacted to "Is this model only u..." with 👍
01:05:34 Giovanni Di Liberto: Replying to "Is this model only u...": Great point. Maybe it's something to discuss after this part? For now, I think it's all envelope
01:06:08 Giovanni Di Liberto: Replying to "Is this model only u...": We had a tutorial about this last year (banded regression and so on). Let's see if Aaron makes it in time to talk about that
01:06:34 Marianne de Heer Kloots: Replying to "Is this model only u...": Ok thanks! That would be great; I was just wondering if maybe I missed something already 😄
01:06:55 Marianne de Heer Kloots: Reacted to "We had a tutorial ab..." with 👍
01:07:11 giorgia cantisani: Replying to "Can you tell me what...": Now Aaron is gonna talk about cross-validation!
01:07:26 Mamady Nabe: Reacted to "Is this model only u..." with 👍
01:07:38 Giovanni Di Liberto: Replying to "Can you tell me what...": That depends on how much data is available in total. If you have enough data, it does not matter too much. Typically, we tend to use 1-min or 2-min trials, for the sake of validation
01:08:16 Giovanni Di Liberto: Replying to "Is this model only u...": We are just going one step at a time. But please bring it up again later!
01:08:25 Giovanni Di Liberto: Reacted to "Ok thanks! That woul..." with 👍
01:08:27 Anton Rogachev: Replying to "Can you tell me what...": Thank you!
01:08:34 Marianne de Heer Kloots: Reacted to "We are just going on..." with 👍
01:12:43 Giovanni Di Liberto: Replying to "Is this model only u...": Oops, he is combining features already!
01:27:13 Emilia Fló: Can you normalize the weights?
01:27:25 Giovanni Di Liberto: Good question
01:27:38 Giovanni Di Liberto: mTRFaverage has a parameter
01:27:47 Giovanni Di Liberto: for normalising before averaging
01:27:58 Giovanni Di Liberto: normalising across subjects
01:28:03 Giovanni Di Liberto: not across electrodes
01:30:13 Giovanni Di Liberto: I found the normalising across subjects very useful in baby data
01:30:23 Giovanni Di Liberto: where the data was really, really noisy (and there was little of it)
01:30:32 Giovanni Di Liberto: But in general, I absolutely agree with Aaron and Mick
01:31:23 Emilia Fló: Thank you!
01:47:00 Stephanie Haro: Thank you!
01:47:04 CNSP: Reacted to "Thank you!" with 👍
01:50:19 Giovanni Di Liberto: Please feel free to tell us how you plan to use these methods, if you have questions about that
01:55:06 Marianne de Heer Kloots: I have to leave now, but thanks a lot! That was very helpful
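[Editor's sketch] Two of the threads above lend themselves to short MATLAB snippets. First, combining features in one forward model (Marianne's question) amounts to concatenating them column-wise in each stimulus cell. Second, the normalising-before-averaging that Giovanni describes can be approximated by scaling each subject's weights by their own global spread; the exact behaviour of mTRFaverage's flag isn't shown in the chat, so this is an assumption about what it amounts to. envelope, wordOnsets, and models are hypothetical variables.

    % Combining features: stack them as columns of each stimulus cell
    % (envelope and wordOnsets are hypothetical, each time x 1, at downFs).
    for tr = 1:numel(stim)
        stim{tr} = [envelope{tr}, wordOnsets{tr}];  % time x 2 features
    end
    % After mTRFtrain, model.w is [nFeatures x nLags x nChannels]:
    % one TRF per feature, fitted jointly.

    % Normalising across subjects before averaging: per subject, over all
    % electrodes at once (matching the distinction made above).
    nSubs = numel(models);                  % models(s) from mTRFtrain per subject
    W = zeros([size(models(1).w), nSubs]);
    for s = 1:nSubs
        w = models(s).w;
        W(:,:,:,s) = w / std(w(:));         % scale by the subject's global SD
    end
    avgW = mean(W, 4);                      % subject-normalised grand-average TRF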
01:55:09 Sandrien van Ommen: a bit of a zoomed-out question, rather: how does the mTRF compare to GAMM models, in your opinion?
01:56:47 Giovanni Di Liberto: Replying to "a bit of a zoomed-ou...": I think GAMMs are non-linear, isn't that the case?
01:57:41 Giovanni Di Liberto: Replying to "a bit of a zoomed-ou...": mTRFs are linear models (we have to apply non-linearities ourselves when extracting the features)
01:58:14 Sandrien van Ommen: Replying to "a bit of a zoomed-ou...": Yeah, I mean actually the idea of using the non-linearity to estimate the development of the response over time. But I guess it is much coarser than mTRF. I have done neither, so I'm just wondering in a very abstract way
01:58:47 Brian Man: I was wondering what are some ways you could define visual features. Apart from the mouth envelope, I wonder if you could define features according to place/manner of articulation? If so, how would you do it?
01:59:54 Giovanni Di Liberto: The connection was bad and I lost the question. But if the question was about time-alignment imprecision and TRFs, check out this paper: https://www.sciencedirect.com/science/article/pii/S0165027022002916
02:00:38 Giovanni Di Liberto: Replying to "I was wondering what...": Yes. We used visemes in the past. Check out Edmund Lalor and Aisling O'Sullivan
02:01:48 Camila Zugarramurdi: Reacted to "The connection was b..." with 👍
02:02:09 Mamady Nabe: Reacted to "The connection was b..." with 👍
02:04:50 Brian Man: Reacted to "Yes. We used visemes..." with 👍
02:05:22 Brian Man: I need to head out now. Thank you very much for the helpful tutorial, and see you tomorrow!
02:05:30 Stephanie Haro: Reacted to "I need to head out n..." with 👍🏼
02:07:43 Giovanni Di Liberto: Thank you all for the brilliant session! I'll have to log off. See you all tomorrow!
02:08:06 CNSP: Reacted to "Thank you all for th..." with 🤌
02:09:19 Stephanie Haro: I also have to head out. Thank you everyone!
02:10:24 Camila Zugarramurdi: Gotta go. Thanks for everything, see you tomorrow.
02:11:22 Mick Crosse: I have to go. Thanks Aaron for a great session!
02:12:29 Anton Rogachev: Thanks!
02:12:45 Ellie Ambridge: Thanks for the great session
02:18:56 Orel Levy: Thanks!
02:18:57 Chelsea Blankenship: Thanks for the session!
02:19:07 Rae Hoeppner (They/Them): Thanks!!
02:19:13 Thaiz Priscilla Sánchez: Thanks!!
02:19:22 Aaron Gibbings: Thanks so much Aaron. Great session.
02:19:40 Aaron Gibbings: I did notice that!
02:19:55 yuval perelmuter: Thank you!
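[Editor's sketch] Returning to Brian's visual-features question: a discrete regressor in the spirit of the viseme work Giovanni points to (O'Sullivan and Lalor) can be coded as one-hot impulses at viseme onsets. Everything below (onsetTimes, visemeIds, nSamples, the inventory size) is a hypothetical illustration of the coding scheme, not the published pipeline.

    % One-hot viseme regressor. Assumed inputs: onsetTimes in seconds and
    % visemeIds in 1..nVisemes, both derived from the video track; nSamples
    % is the trial length at downFs.
    nVisemes = 12;                          % assumed inventory size
    stimVis = zeros(nSamples, nVisemes);    % time x viseme features, at downFs
    for k = 1:numel(onsetTimes)
        idx = round(onsetTimes(k) * downFs) + 1;
        stimVis(idx, visemeIds(k)) = 1;     % impulse at each viseme onset
    end
    % stimVis can be used alone or concatenated with the audio envelope,
    % exactly like any other feature matrix in the mTRF framework.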