#linearmodels

Benedikt Ehinger

‼ Announcement: Online Unfold.jl workshop ‼

📅 09.05.2025
💶 Free!
👉🏼 https://github.com/s-ccs/workshop_unfold_2025
❓ rERPs, mass univariate models & deconvolution!

If you are interested in combined #EEG / #EyeTracking, natural experiments, sequential sampling models + EEG (e.g. DriftDiffusion), or #VR + EEG, this could be a useful workshop for you!

#EEG #linearmodels #statistics #julialang

Organized with Romy Frömer (CHBH) and the S-CCS lab (@uni_stuttgart)
Valeriy M., PhD, MBA, CQF

This study shows that in time series forecasting, a carefully designed linear model can be all you need.

🔗 Read the full paper: arXiv:2502.03571 (https://arxiv.org/pdf/2502.03571)

#TimeSeries #MachineLearning #Forecasting #AI #LinearModels #DataScience
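The linked paper aside, the general recipe behind purely linear forecasters is short enough to sketch. Below is a minimal, hypothetical example — synthetic series, an arbitrary 48-step look-back window, plain least squares via NumPy — regressing the next value on a window of lagged values. It illustrates the idea, not the paper's actual architecture.

```python
# Minimal sketch of a purely linear forecaster: ordinary least squares
# on a window of lagged values. The window length and synthetic series
# are illustrative assumptions, not the paper's setup.
import numpy as np

rng = np.random.default_rng(0)
t = np.arange(400)
y = 0.05 * t + np.sin(2 * np.pi * t / 24) + rng.normal(scale=0.3, size=t.size)

lags = 48  # look-back window (assumption)
# Row j holds the `lags` observations preceding target y[lags + j].
X = np.column_stack([y[i : i + len(y) - lags] for i in range(lags)])
target = y[lags:]

# Closed-form least squares with a bias term: the learned weights map
# the last `lags` observations to the next value.
w, *_ = np.linalg.lstsq(np.column_stack([X, np.ones(len(X))]), target, rcond=None)

last_window = y[-lags:]
forecast = last_window @ w[:-1] + w[-1]
print(f"one-step-ahead forecast: {forecast:.3f}")
```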
brozu ▪️

📈 Models simplify complex observations by filtering out details that might not generalize to new instances, but simplification requires assumptions.

Take #LinearModels: they assume the data is fundamentally linear, dismissing deviations as mere noise.

The art lies in knowing what to keep and what to discard.

#DataScience #MachineLearning #ml #ai
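To make the point concrete, here is a small synthetic sketch (all data and numbers are illustrative): fitting a straight line to data with a quadratic component leaves the curvature behind in the residuals, which the linear model implicitly files under "noise".

```python
# Toy illustration: fit a line to data with a quadratic component.
# The linear model absorbs what it can represent; the curvature it
# cannot represent is dismissed as noise — but it shows up as
# *structured* residuals.
import numpy as np

rng = np.random.default_rng(1)
x = np.linspace(-2, 2, 100)
y = 1.5 * x + 0.8 * x**2 + rng.normal(scale=0.2, size=x.size)

slope, intercept = np.polyfit(x, y, deg=1)  # best straight-line fit
residuals = y - (slope * x + intercept)

# The residuals correlate strongly with x**2: the linearity assumption
# discarded a real pattern, not just noise.
corr = np.corrcoef(residuals, x**2)[0, 1]
print(f"corr(residuals, x^2) = {corr:.2f}")
```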
David Colarusso

"The Robust Beauty of Improper Linear Models in Decision Making" lives rent-free in my mind. I think about this paper from 1979 ALL. THE. TIME!

TL;DR: experts can build robust linear models by just picking a few salient features from their experience. See https://www.cmu.edu/dietrich/sds/docs/dawes/the-robust-beauty-of-improper-linear-models-in-decision-making.pdf

In today's parlance the TL;DR would read "feature selection is really important."

#DataScience #MachineLearning #LinearModels
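Dawes's "improper" models are simple to reproduce: standardize a few plausibly relevant predictors, give each the same weight, and sum. The sketch below (synthetic data; the weights, sample size, and feature count are assumptions for illustration) compares unit weights against fitted least-squares weights; on data like this the two predictions correlate with the outcome almost equally well, which is the paper's point.

```python
# Dawes-style "improper" linear model: unit weights on standardized,
# expert-chosen features, compared against fitted OLS weights.
import numpy as np

rng = np.random.default_rng(2)
n = 200
X = rng.normal(size=(n, 3))          # three salient features (synthetic)
true_w = np.array([0.9, 0.6, 0.4])   # "true" weights, unknown in practice
y = X @ true_w + rng.normal(scale=1.0, size=n)

Xz = (X - X.mean(axis=0)) / X.std(axis=0)

# Improper model: every standardized predictor simply gets weight +1.
pred_unit = Xz.sum(axis=1)

# Proper model: weights estimated by least squares.
w_ols, *_ = np.linalg.lstsq(Xz, y - y.mean(), rcond=None)
pred_ols = Xz @ w_ols

for name, pred in [("unit weights", pred_unit), ("OLS", pred_ols)]:
    r = np.corrcoef(pred, y)[0, 1]
    print(f"{name}: corr with outcome = {r:.3f}")
```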
Daniel Heck

In today's lecture on #StatisticalModeling, I explained how to define meaningful non-orthogonal hypotheses/contrasts in (generalized) #LinearModels.

I only learned about the difference between specifying a contrast matrix vs. a hypothesis matrix in this paper:

How to capitalize on a priori contrasts in linear (mixed) models (by Daniel Schad et al., 2020)
https://doi.org/10.1016/j.jml.2019.104038

Preprint: https://arxiv.org/abs/1807.10451
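The core trick from Schad et al. fits in a few lines: write each hypothesis as a row of weights over the condition means, then take the generalized (Moore-Penrose) inverse of that hypothesis matrix to obtain the contrast matrix for the design. A minimal NumPy sketch for a hypothetical three-level factor (the specific hypotheses below are an illustrative choice, not the paper's example):

```python
# Hypothesis matrix -> contrast matrix via the generalized inverse,
# the central recipe in Schad et al. (2020).
import numpy as np

# Rows = hypotheses as weights over the three condition means.
# Row 1: the intercept should estimate the grand mean; rows 2-3:
# non-orthogonal comparisons of conditions 2 and 3 against condition 1.
H = np.array([
    [1/3, 1/3, 1/3],   # grand mean
    [-1,  1,   0],     # cond2 - cond1
    [-1,  0,   1],     # cond3 - cond1
])

# Contrast matrix: its columns are the predictors to place in the
# design matrix so the fitted coefficients test exactly those rows.
C = np.linalg.pinv(H)
print(np.round(C, 3))
```

Running this yields a column of ones (the intercept) plus two centered treatment-style contrast columns, showing how the coding follows mechanically from the hypotheses rather than the other way around.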