GRE writing notes, including writing vocabulary and essay templates.
GRE synonyms and near-synonyms, organized.
A GRE preparation guide summarizing the author's study experience.
Common GRE words with uncommon meanings, organized.
Paper Reading: One Step of Gradient Descent is Provably the Optimal In-Context Learner with One Layer of Linear Self-Attention.
Paper Reading: In-context Learning with Transformer Is Really Equivalent to a Contrastive Learning Pattern.
Paper Reading: [FOCS 2015] A time-space lower bound for a large class of learning problems.
Paper Reading: Optimal rates for zero-order convex optimization: the power of two function evaluations.
Paper Reading: Information-theoretic lower bounds on the oracle complexity of stochastic convex optimization.
Paper Reading: Adaptive extra-gradient methods for min-max optimization and games. [ICLR 21]
Paper Reading: Stochastic Variance Reduction for Variational Inequality Methods.
Paper Reading: DoWG Unleashed: An Efficient Universal Parameter-Free Gradient Descent Method.
Paper Reading: Adaptive Gradient Descent without Descent.
Paper Reading: On Penalty Methods for Nonconvex Bilevel Optimization and First-Order Stochastic Approximation.
Paper Reading: Towards Gradient-based Bilevel Optimization with Non-convex Followers and Beyond.
Paper Reading: A Generic First-Order Algorithmic Framework for Bi-Level Programming Beyond Lower-Level Singleton.
Paper Reading: On Gradient Descent Ascent for Nonconvex-Concave Minimax Problems.
Paper Reading: Optimal Stochastic Non-smooth Non-convex Optimization through Online-to-Non-convex Conversion.
Explicit superlinear convergence rates of the classical, greedy, and randomized BFGS methods for solving symmetric positive-definite systems.
Paper Reading Notes: Katyusha X: Practical Momentum Method for Stochastic Sum-of-Nonconvex Optimization.
Paper Reading Notes: Spiderboost and momentum: Faster variance reduction algorithms.
Paper Reading Notes: Improved SVRG for non-strongly-convex or sum-of-non-convex objectives.
Paper Reading Notes: Stochastic variance reduction for nonconvex optimization.
Paper Reading Notes: Greedy and Random Quasi-Newton Methods with Faster Explicit Superlinear Convergence.
Paper Reading Notes: Stochastic Dual Coordinate Ascent Methods for Regularized Loss Minimization.
Paper Reading Notes: Linear Coupling: An Ultimate Unification of Gradient and Mirror Descent.
Paper Reading Notes: Fast Extra Gradient Methods for Smooth Structured Nonconvex-Nonconcave Minimax Problems.
Paper Reading Notes: Efficient Methods for Structured Nonconvex-Nonconcave Min-Max Optimization.
Paper Reading Notes: PMGT-VR: A decentralized proximal-gradient algorithmic framework with variance reduction.
Paper Reading Notes: Don't Jump Through Hoops and Remove Those Loops: SVRG and Katyusha are Better Without the Outer Loop.
Paper Reading Notes: A Newton-type Incremental Method with a Superlinear Convergence Rate.
Paper Reading Notes: Regularized Newton Method with Global $O(\frac{1}{k^2})$ Convergence.
Global convergence analysis of the cubic regularized Newton method (Cubic Newton) on nonconvex and convex functions.
Complexity lower bounds for first-order optimization algorithms on the two broad classes of continuously differentiable smooth convex and smooth strongly convex functions, which explain why Nesterov's accelerated method is called optimal.
Notes on quasi-Newton methods, devoted largely to proofs of the convergence and superlinear convergence of the BFGS algorithm.
Nesterov's accelerated variant of proximal gradient descent, with detailed proofs.
The update formulas in Nesterov's accelerated method can read like an oracle. This post records the explanation Yurii Nesterov gives for them in his book Lectures on Convex Optimization; the core idea is to replace convergence of the function values with convergence of an estimate sequence.
FISTA (Fast Iterative Shrinkage-Thresholding Algorithm) is a special case of Nesterov's acceleration applied to proximal gradient descent. On a first reading, the update formulas of accelerated methods feel like an oracle: one can sense the speedup without understanding why it works. This post was written to record a derivation of FISTA.
An introductory treatment of subgradient methods and proximal gradient methods: the former are mostly used for nondifferentiable convex optimization, the latter for nondifferentiable convex regularized problems.
The conjugate gradient method is both a core topic in numerical algorithms and an important class of optimization methods. This post develops the path from gradient methods to conjugate gradient methods, focusing on comparative convergence analysis and proofs.
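As a small companion to the conjugate gradient entry above, here is a minimal sketch of the method for a symmetric positive-definite system (an editor's illustration, not code from the post; the function name and the 2x2 example are invented):

```python
import numpy as np

def conjugate_gradient(A, b, tol=1e-10, max_iter=1000):
    """Solve A x = b for symmetric positive-definite A."""
    x = np.zeros_like(b)
    r = b - A @ x          # residual
    p = r.copy()           # initial search direction
    rs = r @ r
    for _ in range(max_iter):
        Ap = A @ p
        alpha = rs / (p @ Ap)          # exact line search along p
        x += alpha * p
        r -= alpha * Ap
        rs_new = r @ r
        if np.sqrt(rs_new) < tol:
            break
        p = r + (rs_new / rs) * p      # next A-conjugate direction
        rs = rs_new
    return x

A = np.array([[4.0, 1.0], [1.0, 3.0]])
b = np.array([1.0, 2.0])
x = conjugate_gradient(A, b)
```

In exact arithmetic the iterates reach the solution in at most $n$ steps for an $n \times n$ system, which is the property the post's convergence comparison builds on.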
Paper Reading Notes: Accelerating Stochastic Gradient Descent using Predictive Variance Reduction. NIPS 2013.
Paper Reading Notes: Sharpness-Aware Minimization for Efficiently Improving Generalization.
Paper Reading Notes: Optimizing Neural Networks with Kronecker-factored Approximate Curvature.
Paper Reading Notes: How Does Batch Normalization Help Optimization? NIPS'18.
Paper Reading: Smooth Nash Equilibria: Algorithms and Complexity.
Paper Reading Notes: DropEdge: Towards Deep Graph Convolutional Networks on Node Classification.
Approximation algorithms for all-pairs shortest paths on graphs.
The dynamic graph connectivity problem. The author organizes the data structures and algorithms from the relevant papers; only a fraction of the original authors' ingenuity has been grasped, but the annotated notes and figures are collected here in the hope of deepening the author's own understanding and of helping later readers of this line of work.
Paper Reading Notes: Towards Computing a Near-Maximum Weighted Independent Set on Massive Graphs.
Paper Reading Notes: Efficient Graph Similarity Search Over Large Graph Databases.
Paper Reading Notes: ICDE'12 Efficient Graph Similarity Joins with Edit Distance Constraints.
Paper Reading Notes: Efficient Maximum Clique Computation over Large Sparse Graphs.
Paper Reading Notes: Efficient Computation of a Near-Maximum Independent Set over Evolving Graphs.
Paper Reading Notes: Computing a Near-Maximum Independent Set in Dynamic Graphs.
Paper Reading Notes: Computing a Near-Maximum Independent Set in Linear Time by Reducing-Peeling.
Paper Reading Notes: Accelerating Set Intersections over Graphs by Reducing-Merging.
A summary of several classical shortest-path algorithms. Many articles online describe the algorithm steps in detail but give no proof of correctness, and these proofs are not obvious to a newcomer, so this post was written to supply them and to deepen the author's own understanding.
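To accompany the shortest-path entry above, here is a minimal sketch of Dijkstra's algorithm with a binary heap, one of the classical algorithms such a summary typically covers (an editor's illustration, not code from the post; the graph and node names are invented):

```python
import heapq

def dijkstra(graph, source):
    """Single-source shortest paths for non-negative edge weights.
    graph: dict mapping node -> list of (neighbor, weight) pairs."""
    dist = {source: 0}
    heap = [(0, source)]
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist.get(u, float("inf")):
            continue  # stale heap entry, a shorter path was already found
        for v, w in graph.get(u, []):
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                heapq.heappush(heap, (nd, v))
    return dist

g = {"s": [("a", 1), ("b", 4)], "a": [("b", 2), ("t", 6)], "b": [("t", 3)]}
dist = dijkstra(g, "s")   # {'s': 0, 'a': 1, 'b': 3, 't': 6}
```

The correctness proof the post emphasizes rests on the invariant that each popped node's distance is already final, which is exactly why negative edge weights break the algorithm.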
Algorithms for maximum flow and for its special case, maximum bipartite matching, together with correctness and complexity proofs.
The classical minimum spanning tree algorithms are Prim's and Kruskal's. Many articles describe their steps in detail but routinely omit the correctness proofs. The author first met these proofs in Introduction to Algorithms, where the elaborate definitions and properties around "safe edges" were initially hard to digest, so this post summarizes the author's own understanding.
DFS and BFS are the most basic graph traversal algorithms and are vital well beyond graph algorithms; the search ideas behind DFS, for instance, are foundational to artificial intelligence. Many articles explain the intuition for DFS and BFS at length but treat their important properties only briefly, so this post focuses on summarizing those properties.
Solving Markov decision processes through classical models, mainly:
Continuous-state Markov decision processes: linear quadratic optimal control.
Discrete-state Markov decision processes: optimal stopping theory.
Deep reinforcement learning: value-function-based and policy-based methods, actor-critic algorithms, and brief introductions to imitation learning and inverse reinforcement learning.
An introductory summary of reinforcement learning, covering the two main families of value-based and policy-based methods.
Paper Reading: Denoising diffusion implicit models.
Paper Reading: Denoising diffusion probabilistic models.
Numerical methods for the initial value problem of first-order nonlinear ordinary differential equations.
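As an illustration of the ODE initial value problem mentioned above, here is a minimal sketch of the classical fourth-order Runge-Kutta method (an editor's illustration, not code from the post; the function name and test problem are invented):

```python
import math

def rk4(f, t0, y0, t_end, n):
    """Classical 4th-order Runge-Kutta for y' = f(t, y), y(t0) = y0."""
    h = (t_end - t0) / n
    t, y = t0, y0
    for _ in range(n):
        k1 = f(t, y)
        k2 = f(t + h / 2, y + h * k1 / 2)
        k3 = f(t + h / 2, y + h * k2 / 2)
        k4 = f(t + h, y + h * k3)
        y += (h / 6) * (k1 + 2 * k2 + 2 * k3 + k4)  # weighted slope average
        t += h
    return y

# y' = y, y(0) = 1  has exact solution y(1) = e
approx = rk4(lambda t, y: y, 0.0, 1.0, 1.0, 100)
```

With global error $O(h^4)$, 100 steps already reproduce $e$ to well below $10^{-6}$.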
Numerical methods for solving linear systems $A x = b$.
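To illustrate the iterative solution of $Ax = b$, here is a minimal sketch of the Jacobi iteration, which converges when $A$ is strictly diagonally dominant (an editor's illustration, not code from the post; the 2x2 example is invented):

```python
def jacobi(A, b, iters=100):
    """Jacobi iteration for A x = b: each component is updated
    using only the previous iterate's values."""
    n = len(b)
    x = [0.0] * n
    for _ in range(iters):
        x = [(b[i] - sum(A[i][j] * x[j] for j in range(n) if j != i)) / A[i][i]
             for i in range(n)]
    return x

# strictly diagonally dominant system with exact solution x = [1, 1]
A = [[4.0, 1.0], [1.0, 3.0]]
b = [5.0, 4.0]
x = jacobi(A, b)
```

Gauss-Seidel differs only in reusing the freshly updated components within each sweep, which typically roughly doubles the convergence speed.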
Curve fitting is essentially a problem of numerical approximation; this post starts from curve-fitting algorithms and analyzes best square approximation and best uniform approximation.
The author has run into the Fourier transform and the fast Fourier transform repeatedly, in algorithms and data structures, image processing, and elsewhere, without ever seeing the whole picture, and takes this opportunity to organize the topic properly.
The bisection method and its variants, fixed-point iteration, Newton's method viewed as a fixed-point iteration, and classical quasi-Newton methods such as the secant method and Broyden's method.
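As an illustration of the root-finding methods listed above, here is a minimal sketch of Newton's iteration $x_{k+1} = x_k - f(x_k)/f'(x_k)$ (an editor's illustration, not code from the post; the function name and example are invented):

```python
def newton(f, fprime, x0, tol=1e-12, max_iter=50):
    """Newton's method for a root of f, starting from x0."""
    x = x0
    for _ in range(max_iter):
        fx = f(x)
        if abs(fx) < tol:
            break
        x -= fx / fprime(x)   # step along the tangent line
    return x

# sqrt(2) as the positive root of f(x) = x^2 - 2
root = newton(lambda x: x * x - 2, lambda x: 2 * x, 1.0)
```

The secant method replaces `fprime` with a finite-difference slope through the last two iterates, trading the quadratic rate for a superlinear one without needing derivatives.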
The basics of polynomial interpolation: the classical Lagrange, Neville, and Newton interpolation methods; the polynomial reconstruction problem and Chebyshev interpolation; the relation between Hermite interpolation and matrix functions; and spline interpolation and its properties.
The core of time series analysis with ARIMA: model construction, identification, estimation, and forecasting. For forecasting general stationary time series, the Durbin-Levinson algorithm is proved from the viewpoint of projection in Hilbert space; for general ARIMA models, the limiting behavior of long-horizon forecasts and a recursion-based forecasting algorithm are given.
Paper Reading: Two models of double descent for weak features.
Paper Reading: Benign Overfitting in Linear Regression.
Paper Reading: Harmless interpolation of noisy data in regression.
Short proofs of the Cramér-Rao lower bound and the Neyman-Pearson lemma.
Modes of convergence of random variables, including almost sure convergence, convergence in probability, convergence in distribution, and convergence in norm, with proofs of the implications among them.
The author has long been interested in probabilistic graphical models and takes this chance to organize them properly: Bayesian networks, Markov random fields, and common algorithms for inference and learning on graphical models, tying together earlier study of MCMC methods, the EM algorithm, and more.
The EM algorithm (Expectation-Maximization Algorithm) is a classic and elegant statistical learning algorithm of great importance in machine learning and beyond.
Proofs of the main properties of maximum likelihood estimation (MLE), including equivariance, consistency, and asymptotic normality.
A summary of Monte Carlo methods, including rejection sampling and importance sampling, as well as Markov chain Monte Carlo (MCMC): Metropolis-Hastings sampling, Gibbs sampling, and more.
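To accompany the Monte Carlo entry above, here is a minimal sketch of a random-walk Metropolis-Hastings sampler (an editor's illustration, not code from the post; the function name, target, and tuning constants are invented):

```python
import math
import random

def metropolis_hastings(log_target, x0, steps, scale=1.0, seed=0):
    """Random-walk Metropolis sampler for an unnormalized log-density."""
    rng = random.Random(seed)
    x, samples = x0, []
    lp = log_target(x)
    for _ in range(steps):
        prop = x + rng.gauss(0.0, scale)            # symmetric proposal
        lp_prop = log_target(prop)
        if math.log(rng.random()) < lp_prop - lp:   # accept w.p. min(1, ratio)
            x, lp = prop, lp_prop
        samples.append(x)                           # rejected moves repeat x
    return samples

# target: standard normal, known only up to its normalizing constant
samples = metropolis_hastings(lambda x: -0.5 * x * x, 0.0, 50000)
mean = sum(samples) / len(samples)
var = sum((s - mean) ** 2 for s in samples) / len(samples)
```

Because the acceptance rule only uses a ratio of densities, the normalizing constant of the target never has to be computed, which is the central appeal of MCMC.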
A summary of common distributions and hypothesis tests: classical probability limit theorems are derived from characteristic functions, followed by derivations of the Z test, the chi-squared test, the t test, the F test, one-way analysis of variance, and more.
Bayes consistency of partition-based machine learning algorithms, illustrated mainly with random decision trees and random forests.
The Bayes-optimal classifier is the theoretically optimal classifier, but its 0-1 loss is nonconvex and discontinuous and hard to optimize directly, so machine learning usually optimizes a convex surrogate loss instead. This post studies whether surrogate losses can consistently learn the Bayes-optimal classifier, a property known as consistency of the algorithm.
Measuring the complexity of a hypothesis space with the growth function and the VC dimension: the VC dimensions of common classifiers such as linear classifiers, feedforward neural networks, and majority-vote classifiers, and generalization error bounds for finite- and infinite-dimensional hypothesis spaces derived from the VC dimension and probability inequalities.
Stability measures how robust a learning algorithm is to perturbations of its training set. Focusing on replacement stability, this post derives the relations between stability, generalization, and PAC learnability, and uses classical models such as support vector machines and ridge regression to relate stability to regularization.
Analysis of variance (ANOVA) studies how different levels of the independent variables affect the dependent variable. Starting from the linear regression model, this post gives the basic theory of one-way ANOVA, then derives one-way and two-way ANOVA, including the hypothesis tests, from the ANOVA model itself.
Extensions of linear regression: from resolving multicollinearity to ridge regression, the leave-one-out cross-validation formula for linear smoothers, reproducing kernel Hilbert spaces (RKHS) and kernel ridge regression, and the Box-Cox transformation and logistic regression.
An introductory summary of linear regression: basic properties of the model, hypothesis tests and interval estimates, variable selection and its criteria, and remedies for heteroscedasticity and autocorrelation.
Functional data analysis deals with data that are themselves functions; since a function can be viewed as an infinite-dimensional vector, it extends finite-dimensional models to infinite dimensions. This post is an introductory overview that highlights some of the field's distinctive features.
Deriving common linear classifiers from the Bayes-optimal classifier: naive Bayes, linear discriminant analysis, and Fisher discriminant analysis.
Optimal low-rank approximation rests on the Eckart-Young-Mirsky theorem, whose usual proofs are somewhat tedious. This post gives a brief proof from an optimization viewpoint and extends it to an infinite-dimensional functional version.
Ensemble learning combines several weak classifiers into a stronger one. Taking binary classification as the main example, this post focuses on the derivation of the AdaBoost algorithm and its training error bound.
Classical machine learning models for dynamic settings and sequential data: Bayes filtering, Kalman filtering, and the forward-backward and related algorithms for hidden Markov models.
Matrix completion is a key problem in recommender systems. Using it as the running example, this post covers basics of sparse and low-rank optimization, such as matrix subgradients, and includes an introduction to robust PCA.
Principal component analysis (PCA), from plain PCA to kernel PCA, probabilistic PCA, and more.
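To accompany the PCA entry above, here is a minimal sketch of plain PCA via the eigendecomposition of the sample covariance matrix (an editor's illustration, not code from the post; the function name and synthetic data are invented):

```python
import numpy as np

def pca(X, k):
    """Project the rows of X onto the top-k principal components."""
    Xc = X - X.mean(axis=0)                 # center the data
    cov = Xc.T @ Xc / (len(X) - 1)          # sample covariance matrix
    vals, vecs = np.linalg.eigh(cov)        # eigenvalues in ascending order
    components = vecs[:, ::-1][:, :k]       # top-k eigenvectors
    return Xc @ components

rng = np.random.default_rng(0)
# data stretched along the direction (3, 1), plus small isotropic noise,
# so a single component captures almost all of the variance
X = rng.normal(size=(200, 1)) @ np.array([[3.0, 1.0]]) + 0.1 * rng.normal(size=(200, 2))
Z = pca(X, 1)
```

Kernel PCA replaces the covariance eigenproblem with an eigenproblem on the centered kernel matrix, which is what lets the same idea capture nonlinear structure.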
The Gaussian graphical model is an important probabilistic graphical model, closely tied to the many properties of the multivariate Gaussian distribution.
Proofs of the probability inequalities commonly used in machine learning, including the Chernoff, Hoeffding, Bennett, and Bernstein inequalities, together with inequalities for sub-Gaussian variables and data dimensionality reduction via random projections.
Entropy-related material, an important part of machine learning (for example, feature selection in decision trees): KL divergence, entropy, conditional entropy, mutual information, and more.
Common properties of exponential family distributions: from sufficient statistics to conjugate prior distributions and the maximum entropy property, and the use of exponential families in the EM algorithm.
Starting from Gaussian mixture models and probabilistic latent semantic analysis, this post aims to reveal the interesting soul of the EM algorithm.
Starting from a C program, understand how a program is executed by the computer, summarizing the key points of computer organization. This post also marks a pause, for now, in the author's long struggle with ICS (Introduction to Computer Systems).
Paper Reading: Momentum Improves Normalized SGD.
Paper Reading: Benign Overfitting of Constant-Stepsize SGD for Linear Regression.
Paper Reading: A Markov Chain Theory Approach to Characterizing the Minimax Optimality of Stochastic Gradient Descent (for Least Squares).
Paper Reading: PAGE: A Simple and Optimal Probabilistic Gradient Estimator for Nonconvex Optimization.
A summary of Markov chains. Markov chains model stochastic processes with the Markov property: the future state of the process depends only on the current state, not on the history. A Markov chain is usually described by its transition probability matrix $P_{ij}$.
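To illustrate the transition matrix description above, here is a minimal sketch that approximates the stationary distribution of a finite chain by iterating $\pi \leftarrow \pi P$ (an editor's illustration, not code from the post; the function name and example chain are invented):

```python
def stationary(P, iters=200):
    """Approximate the stationary distribution of a finite Markov chain
    by repeatedly applying pi <- pi P from the uniform distribution."""
    n = len(P)
    pi = [1.0 / n] * n
    for _ in range(iters):
        pi = [sum(pi[i] * P[i][j] for i in range(n)) for j in range(n)]
    return pi

# two-state chain: P[i][j] is the probability of moving from state i to j
P = [[0.9, 0.1],
     [0.5, 0.5]]
pi = stationary(P)   # converges to (5/6, 1/6)
```

For an irreducible aperiodic chain this power iteration converges geometrically, at a rate governed by the second-largest eigenvalue modulus of $P$.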
A summary of Brownian motion, which is used to study the irregular motion of molecules, to model the fluctuations of stock markets, and more. This post covers the numerical characteristics of Brownian motion, the features of its sample paths, important derived processes such as the Brownian bridge, and the maximum and first hitting times of Brownian motion; material involving stochastic integration is not included.
The branching process is a special kind of Markov chain, treated separately because it is both distinctive and common in real life; branching processes are typically used to analyze problems such as biological reproduction and particle splitting.
A Poisson process is first of all a counting process: $N(t)$ counts how many times some event occurs within time $t$, such as the number of customer visits or the number of disasters.
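To illustrate the counting-process view above, here is a minimal simulation of a Poisson process using the fact that its inter-arrival times are i.i.d. exponential (an editor's illustration, not code from the post; the function name and parameters are invented):

```python
import random

def simulate_poisson(rate, t_end, rng):
    """Arrival times of a rate-`rate` Poisson process on [0, t_end]:
    inter-arrival times are i.i.d. Exponential(rate)."""
    times, t = [], 0.0
    while True:
        t += rng.expovariate(rate)
        if t > t_end:
            return times
        times.append(t)

rng = random.Random(42)
# N(t_end) ~ Poisson(rate * t_end), so the mean count here is 2 * 10 = 20
counts = [len(simulate_poisson(2.0, 10.0, rng)) for _ in range(2000)]
avg = sum(counts) / len(counts)
```

The same construction underlies thinning and superposition arguments: the count in any interval of length $s$ is Poisson with mean `rate * s`, independently across disjoint intervals.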
From the variational properties of eigenvalues, derive the Ky Fan trace minimization principle and, from it, spectral clustering.
From eigenvalue inequalities for Hermitian matrices, extend via singular values to general matrices to obtain the optimal low-rank approximation theorem, with applications such as principal component analysis (PCA).