<?xml version="1.0" encoding="utf-8" standalone="yes"?><rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom"><channel><title>research on Shiryu NAKANO</title><link>https://shiryu-nakano.github.io/posts/research/</link><description>Recent content in research on Shiryu NAKANO</description><generator>Hugo</generator><language>ja</language><atom:link href="https://shiryu-nakano.github.io/posts/research/index.xml" rel="self" type="application/rss+xml"/><item><title/><link>https://shiryu-nakano.github.io/posts/research/materials/ai-safty/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://shiryu-nakano.github.io/posts/research/materials/ai-safty/</guid><description>&lt;div id="outline-container-headline-1" class="outline-2"&gt;
&lt;h2 id="headline-1"&gt;
References
&lt;/h2&gt;
&lt;div id="outline-text-headline-1" class="outline-text-2"&gt;
&lt;ul&gt;
&lt;li&gt;&lt;a href="https://arxiv.org/abs/1711.10561"&gt;[1711.10561] Physics Informed Deep Learning (Part I): Data-driven Solutions of Nonlinear Partial Differential Equations&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://github.com/HayatoFujihara/awesome-ai-red-teaming-jp"&gt;https://github.com/HayatoFujihara/awesome-ai-red-teaming-jp&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;/div&gt;
&lt;/div&gt;</description></item><item><title>Research Notes</title><link>https://shiryu-nakano.github.io/posts/research/notes/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://shiryu-nakano.github.io/posts/research/notes/</guid><description>&lt;div id="outline-container-headline-1" class="outline-2"&gt;
&lt;h2 id="headline-1"&gt;
Overview
&lt;/h2&gt;
&lt;div id="outline-text-headline-1" class="outline-text-2"&gt;
&lt;p&gt;A place to collect links to materials that come up again and again in my research, or that I keep rereading, together with notes on what I have understood.&lt;/p&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;div id="outline-container-headline-2" class="outline-2"&gt;
&lt;h2 id="headline-2"&gt;
Math Test
&lt;/h2&gt;
&lt;div id="outline-text-headline-2" class="outline-text-2"&gt;
&lt;p&gt;
Inline math: $E = mc^2$&lt;/p&gt;
&lt;p&gt;
Display math:&lt;/p&gt;
&lt;p&gt;
$$
\int_0^\infty e^{-x^2} dx = \frac{\sqrt{\pi}}{2}
$$&lt;/p&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;div id="outline-container-headline-3" class="outline-2"&gt;
&lt;h2 id="headline-3"&gt;
Code Block
&lt;/h2&gt;
&lt;div id="outline-text-headline-3" class="outline-text-2"&gt;
&lt;div class="src src-python"&gt;
&lt;div class="highlight"&gt;&lt;div style="color:#f8f8f2;background-color:#272822;-moz-tab-size:4;-o-tab-size:4;tab-size:4;-webkit-text-size-adjust:none;"&gt;
&lt;table style="border-spacing:0;padding:0;margin:0;border:0;"&gt;&lt;tr&gt;&lt;td style="vertical-align:top;padding:0;margin:0;border:0;"&gt;
&lt;pre tabindex="0" style="color:#f8f8f2;background-color:#272822;-moz-tab-size:4;-o-tab-size:4;tab-size:4;-webkit-text-size-adjust:none;"&gt;&lt;code&gt;&lt;span style="white-space:pre;-webkit-user-select:none;user-select:none;margin-right:0.4em;padding:0 0.4em 0 0.4em;color:#7f7f7f"&gt;1
&lt;/span&gt;&lt;span style="white-space:pre;-webkit-user-select:none;user-select:none;margin-right:0.4em;padding:0 0.4em 0 0.4em;color:#7f7f7f"&gt;2
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/td&gt;
&lt;td style="vertical-align:top;padding:0;margin:0;border:0;;width:100%"&gt;
&lt;pre tabindex="0" style="color:#f8f8f2;background-color:#272822;-moz-tab-size:4;-o-tab-size:4;tab-size:4;-webkit-text-size-adjust:none;"&gt;&lt;code class="language-python" data-lang="python"&gt;&lt;span style="display:flex;"&gt;&lt;span&gt;&lt;span style="color:#66d9ef"&gt;def&lt;/span&gt; &lt;span style="color:#a6e22e"&gt;hello&lt;/span&gt;():
&lt;/span&gt;&lt;/span&gt;&lt;span style="display:flex;"&gt;&lt;span&gt; print(&lt;span style="color:#e6db74"&gt;&amp;#34;Hello from org-mode!&amp;#34;&lt;/span&gt;)&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/td&gt;&lt;/tr&gt;&lt;/table&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;div id="outline-container-headline-4" class="outline-2"&gt;
&lt;h2 id="headline-4"&gt;
List
&lt;/h2&gt;
&lt;div id="outline-text-headline-4" class="outline-text-2"&gt;
&lt;ul&gt;
&lt;li&gt;Item 1&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Item 2&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Nested item&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;Item 3&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;
&lt;a href="https://shiryu-nakano.github.io/posts/research/attitude/"&gt;On mindset&lt;/a&gt;&lt;/p&gt;
&lt;/div&gt;
&lt;/div&gt;</description></item><item><title>Org-mode Test</title><link>https://shiryu-nakano.github.io/posts/research/org-test/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://shiryu-nakano.github.io/posts/research/org-test/</guid><description>&lt;div id="outline-container-headline-1" class="outline-2"&gt;
&lt;h2 id="headline-1"&gt;
Overview
&lt;/h2&gt;
&lt;div id="outline-text-headline-1" class="outline-text-2"&gt;
&lt;p&gt;
This is a test post written in org-mode.&lt;/p&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;div id="outline-container-headline-2" class="outline-2"&gt;
&lt;h2 id="headline-2"&gt;
Math Test
&lt;/h2&gt;
&lt;div id="outline-text-headline-2" class="outline-text-2"&gt;
&lt;p&gt;
Inline math: $E = mc^2$&lt;/p&gt;
&lt;p&gt;
Display math:&lt;/p&gt;
&lt;p&gt;
$$
\int_0^\infty e^{-x^2} dx = \frac{\sqrt{\pi}}{2}
$$&lt;/p&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;div id="outline-container-headline-3" class="outline-2"&gt;
&lt;h2 id="headline-3"&gt;
Code Block
&lt;/h2&gt;
&lt;div id="outline-text-headline-3" class="outline-text-2"&gt;
&lt;div class="src src-python"&gt;
&lt;div class="highlight"&gt;&lt;div style="color:#f8f8f2;background-color:#272822;-moz-tab-size:4;-o-tab-size:4;tab-size:4;-webkit-text-size-adjust:none;"&gt;
&lt;table style="border-spacing:0;padding:0;margin:0;border:0;"&gt;&lt;tr&gt;&lt;td style="vertical-align:top;padding:0;margin:0;border:0;"&gt;
&lt;pre tabindex="0" style="color:#f8f8f2;background-color:#272822;-moz-tab-size:4;-o-tab-size:4;tab-size:4;-webkit-text-size-adjust:none;"&gt;&lt;code&gt;&lt;span style="white-space:pre;-webkit-user-select:none;user-select:none;margin-right:0.4em;padding:0 0.4em 0 0.4em;color:#7f7f7f"&gt;1
&lt;/span&gt;&lt;span style="white-space:pre;-webkit-user-select:none;user-select:none;margin-right:0.4em;padding:0 0.4em 0 0.4em;color:#7f7f7f"&gt;2
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/td&gt;
&lt;td style="vertical-align:top;padding:0;margin:0;border:0;;width:100%"&gt;
&lt;pre tabindex="0" style="color:#f8f8f2;background-color:#272822;-moz-tab-size:4;-o-tab-size:4;tab-size:4;-webkit-text-size-adjust:none;"&gt;&lt;code class="language-python" data-lang="python"&gt;&lt;span style="display:flex;"&gt;&lt;span&gt;&lt;span style="color:#66d9ef"&gt;def&lt;/span&gt; &lt;span style="color:#a6e22e"&gt;hello&lt;/span&gt;():
&lt;/span&gt;&lt;/span&gt;&lt;span style="display:flex;"&gt;&lt;span&gt; print(&lt;span style="color:#e6db74"&gt;&amp;#34;Hello from org-mode!&amp;#34;&lt;/span&gt;)&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/td&gt;&lt;/tr&gt;&lt;/table&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;div id="outline-container-headline-4" class="outline-2"&gt;
&lt;h2 id="headline-4"&gt;
List
&lt;/h2&gt;
&lt;div id="outline-text-headline-4" class="outline-text-2"&gt;
&lt;ul&gt;
&lt;li&gt;Item 1&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Item 2&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Nested item&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;Item 3&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;&lt;img src="../../../static/images/posts/research/20260419_123420.png" alt="../../../static/images/posts/research/20260419_123420.png" title="../../../static/images/posts/research/20260419_123420.png" /&gt;&lt;/p&gt;
&lt;p&gt;&lt;img src="../../../static/images/posts/research/20260419_134538.png" alt="../../../static/images/posts/research/20260419_134538.png" title="../../../static/images/posts/research/20260419_134538.png" /&gt;&lt;/p&gt;
&lt;/div&gt;
&lt;/div&gt;</description></item><item><title>Paper To Read</title><link>https://shiryu-nakano.github.io/posts/research/paper_stock/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://shiryu-nakano.github.io/posts/research/paper_stock/</guid><description>&lt;ul&gt;
&lt;li class="unchecked"&gt;&lt;a href="https://arxiv.org/abs/2306.11922"&gt;[2306.11922] No Wrong Turns: The Simple Geometry Of Neural Networks Optimization Paths&lt;/a&gt;&lt;/li&gt;
&lt;li class="unchecked"&gt;&lt;a href="https://arxiv.org/abs/2603.27432"&gt;[2603.27432] The Geometric Cost of Normalization: Affine Bounds on the Bayesian Complexity of Neural Networks&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;div id="outline-container-headline-1" class="outline-3"&gt;
&lt;h3 id="headline-1"&gt;
DSB
&lt;/h3&gt;
&lt;div id="outline-text-headline-1" class="outline-text-3"&gt;
&lt;ul&gt;
&lt;li class="unchecked"&gt;&lt;a href="https://arxiv.org/abs/2410.19637"&gt;[2410.19637] A distributional simplicity bias in the learning dynamics of transformers&lt;/a&gt;&lt;/li&gt;
&lt;li class="checked"&gt;
&lt;p&gt;&lt;a href="https://scholar.google.com/scholar?start=20&amp;amp;hl=ja&amp;amp;as_sdt=2005&amp;amp;sciodt=0,5&amp;amp;cites=5568799983092346925&amp;amp;scipsc="&gt;Belrose: Neural networks learn statistics of increasing… - Google Scholar&lt;/a&gt;&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;A survey of papers citing this one&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li class="unchecked"&gt;&lt;a href="https://arxiv.org/abs/2603.12901"&gt;[2603.12901] A theory of learning data statistics in diffusion models, from easy to hard&lt;/a&gt;&lt;/li&gt;
&lt;li class="unchecked"&gt;&lt;a href="https://arxiv.org/abs/2510.04285"&gt;[2510.04285] Probing Geometry of Next Token Prediction Using Cumulant Expansion of the Softmax Entropy&lt;/a&gt;&lt;/li&gt;
&lt;li class="unchecked"&gt;&lt;a href="https://arxiv.org/abs/2602.12257"&gt;[2602.12257] On the implicit regularization of Langevin dynamics with projected noise&lt;/a&gt;&lt;/li&gt;
&lt;li class="unchecked"&gt;&lt;a href="https://openreview.net/forum?id=CPKMwyiyDv"&gt;Neural networks trained with SGD learn distributions of increasing complexity | OpenReview&lt;/a&gt;&lt;/li&gt;
&lt;li class="unchecked"&gt;&lt;a href="https://proceedings.neurips.cc/paper/2019/hash/b432f34c5a997c8e7c806a895ecc5e25-Abstract.html"&gt;SGD on Neural Networks Learns Functions of Increasing Complexity&lt;/a&gt;&lt;/li&gt;
&lt;li class="unchecked"&gt;&lt;a href="https://arxiv.org/abs/1805.08522"&gt;[1805.08522] Deep learning generalizes because the parameter-function map is biased towards simple functions&lt;/a&gt;&lt;/li&gt;
&lt;li class="unchecked"&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;div id="outline-container-headline-2" class="outline-3"&gt;
&lt;h3 id="headline-2"&gt;
SLT for Deep Learning
&lt;/h3&gt;
&lt;div id="outline-text-headline-2" class="outline-text-3"&gt;
&lt;ul&gt;
&lt;li class="unchecked"&gt;&lt;a href="https://www.lesswrong.com/posts/6g8cAftfQufLmFDYT/you-re-measuring-model-complexity-wrong"&gt;You&amp;#39;re Measuring Model Complexity Wrong — LessWrong&lt;/a&gt;&lt;/li&gt;
&lt;li class="unchecked"&gt;&lt;a href="https://arxiv.org/abs/2410.02984"&gt;[2410.02984] Differentiation and Specialization of Attention Heads via the Refined Local Learning Coefficient&lt;/a&gt;&lt;/li&gt;
&lt;li class="unchecked"&gt;&lt;a href="https://arxiv.org/abs/2511.04564"&gt;[2511.04564] Uncertainties in Physics-informed Inverse Problems: The Hidden Risk in Scientific AI&lt;/a&gt;&lt;/li&gt;
&lt;li class="unchecked"&gt;&lt;a href="https://arxiv.org/abs/2406.10234"&gt;[2406.10234] Review and Prospect of Algebraic Research in Equivalent Framework between Statistical Mechanics and Machine Learning Theory&lt;/a&gt;&lt;/li&gt;
&lt;li class="unchecked"&gt;
&lt;p&gt;Computational Complexity of Learning Neural Networks: Smoothness and Degeneracy&lt;/p&gt;</description></item><item><title>PINNs</title><link>https://shiryu-nakano.github.io/posts/research/materials/pinns/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://shiryu-nakano.github.io/posts/research/materials/pinns/</guid><description>&lt;div id="outline-container-headline-1" class="outline-2"&gt;
&lt;h2 id="headline-1"&gt;
Overview
&lt;/h2&gt;
&lt;div id="outline-text-headline-1" class="outline-text-2"&gt;
&lt;p&gt;I do not yet understand this well enough to write it up.&lt;/p&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;div id="outline-container-headline-2" class="outline-2"&gt;
&lt;h2 id="headline-2"&gt;
References
&lt;/h2&gt;
&lt;div id="outline-text-headline-2" class="outline-text-2"&gt;
&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;The original PINNs paper&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Physics-informed neural networks: A deep learning framework for solving forward and inverse problems involving nonlinear partial differential equations&lt;/li&gt;
&lt;li&gt;Read this first, since a local copy is available:
/Users/nkn4ryu/Downloads/1-s2.0-S0021999118307125-main.pdf&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;&lt;a href="https://arxiv.org/abs/2602.11097"&gt;[2602.11097] Statistical Learning Analysis of Physics-Informed Neural Networks&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://arxiv.org/abs/2211.08064"&gt;[2211.08064] Physics-Informed Machine Learning: A Survey on Problems, Methods and Applications&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://www.sciencedirect.com/science/article/pii/S0893608026003217"&gt;An efficient wavelet-based physics-informed neural network for multiscale problems - ScienceDirect&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://www.themoonlight.io/en/review/advancing-generalization-in-pinns-through-latent-space-representations"&gt;[Literature Review] Advancing Generalization in PINNs through Latent-Space Representations&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://arxiv.org/abs/1711.10561"&gt;[1711.10561] Physics Informed Deep Learning (Part I): Data-driven Solutions of Nonlinear Partial Differential Equations&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;/div&gt;
&lt;/div&gt;</description></item><item><title>Scaling Laws in Deep Learning</title><link>https://shiryu-nakano.github.io/posts/research/scaling-laws/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://shiryu-nakano.github.io/posts/research/scaling-laws/</guid><description>&lt;h2 id="overview"&gt;Overview&lt;a class="anchor" href="#overview"&gt;#&lt;/a&gt;&lt;/h2&gt;
&lt;p&gt;Scaling laws describe how model performance changes with compute, data, and parameters.&lt;/p&gt;
&lt;h2 id="key-papers"&gt;Key Papers&lt;a class="anchor" href="#key-papers"&gt;#&lt;/a&gt;&lt;/h2&gt;
&lt;ul&gt;
&lt;li&gt;Kaplan et al. (2020): Scaling Laws for Neural Language Models&lt;/li&gt;
&lt;li&gt;Hoffmann et al. (2022): Training Compute-Optimal Large Language Models (Chinchilla)&lt;/li&gt;
&lt;/ul&gt;
&lt;h2 id="observations"&gt;Observations&lt;a class="anchor" href="#observations"&gt;#&lt;/a&gt;&lt;/h2&gt;
&lt;ul&gt;
&lt;li&gt;Loss falls as a power law in model size, dataset size, and compute, holding across several orders of magnitude&lt;/li&gt;
&lt;li&gt;For a fixed compute budget there is an optimal split between model size and training data; Chinchilla finds parameters and tokens should scale roughly in proportion&lt;/li&gt;
&lt;/ul&gt;</description></item><item><title>Singular Learning Theory (SLT)</title><link>https://shiryu-nakano.github.io/posts/research/slt/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://shiryu-nakano.github.io/posts/research/slt/</guid><description>&lt;h2 id="what-is-slt"&gt;What is SLT?&lt;a class="anchor" href="#what-is-slt"&gt;#&lt;/a&gt;&lt;/h2&gt;
&lt;p&gt;Singular Learning Theory (SLT) is a mathematical framework developed by Sumio Watanabe for analyzing statistical learning when the model is singular (non-regular).&lt;/p&gt;
&lt;h2 id="key-concepts"&gt;Key Concepts&lt;a class="anchor" href="#key-concepts"&gt;#&lt;/a&gt;&lt;/h2&gt;
&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;RLCT (Real Log Canonical Threshold)&lt;/strong&gt;: A key quantity that determines generalization error&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Free Energy&lt;/strong&gt;: Measures model complexity in Bayesian inference&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Singular Models&lt;/strong&gt;: Models where the Fisher information matrix is degenerate&lt;/li&gt;
&lt;/ul&gt;
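&lt;p&gt;
These concepts connect through Watanabe's free-energy asymptotics: the Bayesian free energy after $n$ samples expands as&lt;/p&gt;
&lt;p&gt;
$$
F_n = nS_n + \lambda \log n + O_p(\log \log n)
$$&lt;/p&gt;
&lt;p&gt;
where $S_n$ is the empirical entropy term and $\lambda$ is the RLCT. For a regular model $\lambda = d/2$ (with $d$ the parameter count), recovering the BIC penalty; for singular models $\lambda \le d/2$, so they generalize as if they had fewer effective parameters.&lt;/p&gt;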
&lt;h2 id="references"&gt;References&lt;a class="anchor" href="#references"&gt;#&lt;/a&gt;&lt;/h2&gt;
&lt;ul&gt;
&lt;li&gt;Watanabe, S. (2009). Algebraic Geometry and Statistical Learning Theory&lt;/li&gt;
&lt;li&gt;Watanabe, S. (2018). Mathematical Theory of Bayesian Statistics&lt;/li&gt;
&lt;/ul&gt;</description></item><item><title>Tips for Attitude</title><link>https://shiryu-nakano.github.io/posts/research/attitude/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://shiryu-nakano.github.io/posts/research/attitude/</guid><description>&lt;p&gt;This page will be updated and extended over time.&lt;/p&gt;
&lt;p&gt;A record of the articles and books that have helped me most, especially in terms of mindset, with research, study, and work.&lt;/p&gt;
&lt;hr&gt;
&lt;ul&gt;
&lt;li&gt;Mathematics
&lt;ul&gt;
&lt;li&gt;&lt;a href="https://note.com/taketo1024/n/n2c3f1fa716ab"&gt;数学者を目指す&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://www.ms.u-tokyo.ac.jp/~yasuyuki/sem.htm"&gt;How to prepare for seminars&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://www.youtube.com/watch?v=WEmVYvijaMc"&gt;https://www.youtube.com/watch?v=WEmVYvijaMc&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;hr&gt;
&lt;h2 id="研究関連"&gt;研究関連&lt;a class="anchor" href="#%e7%a0%94%e7%a9%b6%e9%96%a2%e9%80%a3"&gt;#&lt;/a&gt;&lt;/h2&gt;
&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;&lt;a href="https://joisino.hatenablog.com/entry/2023/10/29/164650#%E7%A0%94%E7%A9%B6%E3%83%86%E3%83%BC%E3%83%9E%E3%81%AE%E6%B1%BA%E3%82%81%E6%96%B9%E3%81%AB%E3%81%A4%E3%81%84%E3%81%A6"&gt;君たちはどう研究するか - ｼﾞｮｲｼﾞｮｲｼﾞｮｲ&lt;/a&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;a href="https://speakerdeck.com/joisino/randomness?slide=55"&gt;研究の進め方 ランダムネスとの付き合い方について - Speaker Deck&lt;/a&gt;&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;a href="https://drive.google.com/file/d/1dLYZfxCEJWq8v9tQIfKJBOvPU998lLrS/view?usp=drive_link"&gt;バックアップ&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;a href="https://github.com/takahashihiroshi/takahashihiroshi.github.io/blob/master/contents/for_ml_beginners.md"&gt;機械学習の研究者を目指す人へ&lt;/a&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;a href="https://joisino.hatenablog.com/entry/2023/04/10/170519"&gt;論文読みの日課について - ｼﾞｮｲｼﾞｮｲｼﾞｮｲ&lt;/a&gt;&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;a href="https://psearch.joisino.net/"&gt;Paper Search from Venues&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;a href="https://github.com/Kei18/awesome_cs-ja_phd_life?tab=readme-ov-file"&gt;Awesome CS-Ja PhD Life&lt;/a&gt;&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;
&lt;blockquote class='book-hint '&gt;
&lt;p&gt;A list of articles and resources likely to help you survive, or stay motivated through, a PhD program. Mostly CS-related material readable in Japanese. Also includes things useful for undergraduate and master's theses, and posts about the careers that follow.&lt;/p&gt;
&lt;/blockquote&gt;&lt;/li&gt;
&lt;li&gt;It might be worth collecting the markdown files from this repo into a single Notion database or similar and having it recommend entries at random until I have read them all.&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;a href="https://psearch.joisino.net/"&gt;https://psearch.joisino.net/&lt;/a&gt;&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Paper Search from Venues&lt;/li&gt;
&lt;li&gt;Start with the famous (classic) papers&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;hr&gt;
&lt;h2 id="事前準備"&gt;事前準備&lt;a class="anchor" href="#%e4%ba%8b%e5%89%8d%e6%ba%96%e5%82%99"&gt;#&lt;/a&gt;&lt;/h2&gt;
&lt;p&gt;I enjoy studying, so I want to read a wide range of books as a hobby.&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;a href="https://github.com/takahashihiroshi/takahashihiroshi.github.io/blob/master/contents/for_ml_beginners.md"&gt;https://github.com/takahashihiroshi/takahashihiroshi.github.io/blob/master/contents/for_ml_beginners.md&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://weblab.t.u-tokyo.ac.jp/lecture/learning-roadmap/"&gt;人工知能を学ぶためのロードマップ - 東京大学松尾・岩澤研究室（松尾研）- Matsuo Lab&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://qiita.com/ssugasawa/items/0e0d76de3ed92c7410e6"&gt;ベイズ統計学を勉強する参考書のフロー #データサイエンス - Qiita&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://qiita.com/kueda_cs/items/28008db6491c71ac5659"&gt;統計・機械学習の理論を学ぶ手順 #数学 - Qiita&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://seasawher.hatenablog.com/entry/2020/04/25/175335"&gt;大学数学の文献案内 - 数論幾何の理解を目指して - - パンの木を植えて&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://qiita.com/takaaki_inada/items/5f8f505be2945137d191"&gt;ITエンジニアのための機械学習理論入門読了者が Kaggle やってみた #Python - Qiita&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;</description></item></channel></rss>