
#benchmarking

PUPUWEB Blog
Meta plans to launch #Llama4 later this month after multiple delays, citing underperformance in reasoning & math benchmarks. 🤖📉 #AI #MachineLearning #TechNews #LlamaAI #ArtificialIntelligence #Benchmarking #AIResearch

N-gated Hacker News
🚀 Behold, yet another mind-blowing reinvention of the #benchmarking wheel, this time bedazzled with #Go glitter! 🌟 Perfect for those who need to benchmark how quickly they can waste their time in the #CLI. 🤖🔧
https://github.com/ConduitIO/benchi
#glitter #innovation #tech #news #HackerNews #ngated

Xavier B.
@mariejulien In my opinion, you haven't found your PMF (Pouët/Market Fit) yet.
#knowYourAudience #benchmarking #notInKansasAnymore

Joseph Simons 🍁 🌱
"In its #Municipal #Benchmarking 2024 Study, the #CanadianHomeBuildersAssociation has ranked #Edmonton as the most builder-friendly city in #Canada for the second straight year. Edmonton ranked sixth for planning features, fourth for approval time, second for high-rise fees, and sixth for low-rise government fees."
https://www.chba.ca/assets/pdf/CHBA+Municipal+Benchmarking+Study-3rd+Edition-2024/?utm_source=Taproot+Edmonton&utm_campaign=11a28a527e-TAPROOTYEG_PULSE_2025_03_27&utm_medium=email&utm_term=0_ef1adf0932-11a28a527e-438152299&mc_cid=11a28a527e&mc_eid=2af62197a9

C++Now
C++Now 2025 SESSION ANNOUNCEMENT: Explore Microbenchmark with beman.inplace_vector by River Wu
https://schedule.cppnow.org/session/2025/explore-microbenchmark-with-beman-inplace_vector/
Register now at https://cppnow.org/registration/
#benchmarking #cplusplus #cpp #cpp26

B166IR
https://youtu.be/J4qwuCXyAcU
This video compares Ollama and LM Studio (GGUF) and shows that their performance is quite similar, with LM Studio's tok/sec output used for consistent benchmarking.
What's even more impressive? The Mac Studio M3 Ultra pulls under 200W during inference with the Q4 671B R1 model. That's quite amazing for such performance!
#LLMs #AI #MachineLearning #Ollama #LMStudio #GGUF #MLX #TechReview #Benchmarking #MacStudio #M3Ultra #LocalLLM #AIbenchmarks #EnergyEfficient #linux

HGPU group
A Microbenchmark Framework for Performance Evaluation of OpenMP Target Offloading
#OpenMP #Benchmarking #Performance #Package
https://hgpu.org/?p=29809

Habr
[Translation] Evaluating large language models in 2025: five methods
Large language models (LLMs) have been developing rapidly and have the potential to radically transform AI. Accurate evaluation of LLMs is critical because:
• Companies must choose which generative AI models to deploy. There are now many base LLMs, each with numerous variants.
• Once a model is chosen, it will be fine-tuned, and if its performance is not measured precisely enough, users cannot judge how effective their efforts have been.
It is therefore necessary to determine:
• the best methods for evaluating models
• the right kind of data for training and testing them
Since evaluating LLM systems is a multidimensional task, it is important to develop a comprehensive methodology for measuring their performance. This article reviews the main problems with existing evaluation methods and proposes ways to address them.
https://habr.com/ru/articles/887290/
#llm #ai #benchmarking #finetuning #bleu #rouge #бенчмаркинг

Andrew Jones (hpcnotes)
UK-based #HPC benchmarking role at Microsoft
Requires real experience with hands-on HPC #benchmarking - porting, compiling, tuning, performance analysis etc. of scientific codes on HPC systems
https://buff.ly/fKfQz6j

Habr
[Translation] Benchmarking AI agents: evaluating performance on real-world tasks
AI agents are already solving real-world tasks, from customer service to complex data analytics. But how do you make sure they are actually effective? The answer lies in comprehensive evaluation of AI agents. For an AI system to be reliable and consistent, it is important to understand the types of AI agents and to know how to evaluate them properly, using advanced methodologies and proven AI-agent evaluation frameworks. In this article we look at key metrics, best practices, and the main challenges companies face when evaluating AI agents in enterprise environments.
https://habr.com/ru/articles/886198/
#ai_agent #benchmarking #ии_агенты #бенчмаркинг #llm

Wizards Anonymous
Curious which #OpenSource options #Wizards prefer to utilize for #Benchmarking #Disk / #SSD. :)

HGPU group
Evaluating the Performance of the DeepSeek Model in Confidential Computing Environment
#Security #DeepSeek #LLM #Cloud #Performance #Benchmarking
https://hgpu.org/?p=29782

Andrew Jones (hpcnotes)
Olga Pearce from LLNL giving a talk on #benchmarking for #HPC at #MW25NZ
Proposing a specification for running HPC benchmarks - benchpark - to help automation, reuse, reproducibility, tracking, etc.

Jeff Fortin T.
The rabbit-hole investigation of Nautilus' very slow cold-disk-cache folder-loading performance continued this weekend.
Latest findings here: https://gitlab.gnome.org/GNOME/nautilus/-/issues/3374#note_2345406
#GNOMEFiles #Nautilus #GNOME #performance #Sysprof #benchmarking #filesystems

James Young
Surely someone's looked into this: if I wanted to store millions or billions of files on a filesystem, I wouldn't store them in one single subdirectory / folder. I'd split them up into nested folders, so each folder held, say, 100 or 1000 or n files or folders. What's the optimum n for filesystems, for performance or space?
I've idly pondered how to experimentally gather some crude statistics, but it feels like I'm just forgetting to search some obvious keywords.
#BillionFileFS #linux #filesystems #optimization #benchmarking

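One crude way to gather the statistics the post asks about, sketched as a minimal Python script (not from the post itself): it times file creation and warm-cache stat() for a few candidate fan-outs n. The total file count, the fan-out values, and the use of a temporary directory are arbitrary assumptions, and it does not drop the page cache, so it says nothing about cold-cache behaviour or any particular filesystem's optimum.

#!/usr/bin/env python3
"""Crude directory fan-out benchmark sketch; all sizes below are arbitrary."""
import os
import tempfile
import time

TOTAL = 100_000                  # assumed number of files per run; lower it for a quick test
FANOUTS = [100, 1_000, 10_000]   # candidate files-per-directory values to compare

def run(fanout: int, root: str) -> tuple[float, float]:
    # Create TOTAL empty files spread across TOTAL // fanout subdirectories.
    t0 = time.perf_counter()
    for i in range(TOTAL):
        d = os.path.join(root, f"d{i // fanout:06d}")
        os.makedirs(d, exist_ok=True)
        with open(os.path.join(d, f"f{i:08d}"), "w"):
            pass
    create_s = time.perf_counter() - t0

    # Stat every file again to measure lookup cost (warm cache only).
    t0 = time.perf_counter()
    for i in range(TOTAL):
        os.stat(os.path.join(root, f"d{i // fanout:06d}", f"f{i:08d}"))
    stat_s = time.perf_counter() - t0
    return create_s, stat_s

if __name__ == "__main__":
    for n in FANOUTS:
        with tempfile.TemporaryDirectory() as root:
            c, s = run(n, root)
            print(f"fanout={n:>6}: create {c:.2f}s, stat {s:.2f}s")

Run it on the target filesystem and mount options of interest; for cold-cache numbers one would additionally have to drop caches between the create and stat phases, which this sketch deliberately omits.
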
HGPU group
Thesis: Modernization and Optimization of MPI Codes
#MPI #OpenMP #Benchmarking #Performance #Package
https://hgpu.org/?p=29718

Stefan Marr
Our benchmarking tool got a new release, ReBench 1.3
Important changes:
 - better support for environment variables
 - more predictable handling of build commands
 - support for machine-specific settings
 - tool to reduce measurement noise is more robust
https://github.com/smarr/ReBench/releases/tag/v1.3.0
#benchmarking #languageImplementation #experiments #science

Microsoft DevBlogs
Join the conversation and optimize your projects!
#VisualStudio #Benchmarking #PerformanceOptimization
This thread was auto-generated from the original post, which can be found here: https://devblogs.microsoft.com/visualstudio/benchmarking-with-visual-studio-profiler/

Leiden Madtrics
📢 New blogpost!
Benchmarking - an appropriate method for evaluating research units? Thed van Leeuwen and Frank van Vree explore possibilities and caveats, particularly in the context of the Dutch Strategy Evaluation Protocol (SEP).
You can read the bilingual post here:
ENG 👉 https://www.leidenmadtrics.nl/articles/benchmarking-in-research-evaluations-we-can-do-without-it
NL 👉 https://www.leidenmadtrics.nl/articles/benchmarking-bij-onderzoeksevaluaties-we-kunnen-zonder
#benchmarking #ResearchEvaluation

Christos Argyropoulos MD, PhD
5 of these methods can leverage multithreaded (MT) #BLAS with a sweet spot of ~6 threads for the 40% of the time spent in MT regions. The E5-2697 has 36/72 (physical/logical) cores, so the average-case scenario is one in which 0.4 x 3 x 6 cores + 2 (serial methods) tie up ~9.2 cores, ~13% of the 72 logical cores. So far the back-of-envelope calculation (i.e. that if I run 5 of the 2100 design points in parallel, I will stay within 15% of resource use) is holding rather well! #benchmarking #hpc #rstats
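
Restating the post's back-of-envelope arithmetic as a tiny sketch (the factor 3 is carried over verbatim from the post, which does not spell out what it stands for; no new data is introduced here):

# Numbers taken from the post above, only restated.
mt_fraction   = 0.4   # fraction of runtime spent in multithreaded BLAS regions
factor        = 3     # multiplier used in the post (not explained there)
blas_threads  = 6     # sweet-spot thread count for the MT BLAS regions
serial_cores  = 2     # the two serial methods
logical_cores = 72    # per the post: E5-2697, 36 physical / 72 logical cores

busy = mt_fraction * factor * blas_threads + serial_cores
print(busy, busy / logical_cores)   # 9.2 cores, ~0.128 of the logical cores (~13%)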