A matrix extension of the Cauchy-Schwarz inequality
Section snippets
Main result
Notation 1. Let A and B be two p×p matrices. We write A ≤ B if and only if B − A is non-negative definite. ‖A‖ denotes the Euclidean norm of a matrix, i.e. ‖A‖ = [tr(A′A)]^{1/2}. The transpose of A is denoted A′. □
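The two notions in Notation 1 can be checked numerically; the sketch below (function names are ours, not the paper's) computes the Euclidean norm as [tr(A′A)]^{1/2} and tests A ≤ B by checking that every eigenvalue of B − A is non-negative:

```python
import numpy as np

def euclidean_norm(A):
    """Euclidean (Frobenius) norm of a matrix: sqrt(tr(A'A))."""
    return np.sqrt(np.trace(A.T @ A))

def leq(A, B, tol=1e-10):
    """A <= B iff B - A is non-negative definite (all eigenvalues >= -tol)."""
    return bool(np.all(np.linalg.eigvalsh(B - A) >= -tol))

A = np.eye(2)
B = np.array([[2.0, 1.0],
              [1.0, 2.0]])
print(euclidean_norm(A))  # sqrt(2) for the 2x2 identity
print(leq(A, B))          # B - A = [[1,1],[1,1]] is PSD -> True
print(leq(B, A))          # A - B has a negative eigenvalue -> False
```

Note that ≤ defined this way is only a partial order: for many pairs of matrices neither A ≤ B nor B ≤ A holds.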
Using this notation we prove the following, very useful, inequality.

Theorem 1. Let x ∈ ℝ^p and y ∈ ℝ^q be random vectors such that E‖x‖² < ∞, E‖y‖² < ∞, and E(yy′) is non-singular. Then E(xy′)[E(yy′)]⁻¹E(yx′) ≤ E(xx′).

Example 1.1 (A Simple Application). As an application of the above inequality, consider the regression model y = θ′x + γ′z + ε with …
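Theorem 1 can be illustrated with a small simulation. The sketch below (our construction, not the paper's) replaces the population moments with sample averages; since the sample moments are themselves expectations under the empirical distribution, the matrix E(xx′) − E(xy′)[E(yy′)]⁻¹E(yx′) computed from them should be non-negative definite up to floating-point error:

```python
import numpy as np

# Monte Carlo illustration of Theorem 1 with x in R^p, y in R^q correlated.
rng = np.random.default_rng(0)
n, p, q = 100_000, 2, 3
x = rng.standard_normal((n, p))
y = x @ rng.standard_normal((p, q)) + rng.standard_normal((n, q))

Exx = x.T @ x / n  # sample analogue of E(xx')
Exy = x.T @ y / n  # sample analogue of E(xy')
Eyy = y.T @ y / n  # sample analogue of E(yy'), non-singular here

# Theorem 1 says this "gap" matrix is non-negative definite.
gap = Exx - Exy @ np.linalg.solve(Eyy, Exy.T)
print(np.linalg.eigvalsh(gap))  # eigenvalues should all be >= 0
```

The gap matrix is exactly the second-moment matrix of the residual from linearly projecting x on y, which is why it is non-negative definite; equality (a zero gap) occurs precisely when x is a linear transformation of y.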
Conclusion
This inequality was obtained in response to a question asked by some students while I was teaching a graduate course in Econometrics. I have found it to be a handy little tool while looking at issues related to asymptotic efficiency of estimators. Although this inequality looks astonishingly familiar, I have been unable to discover any references to it in the literature that I have reviewed.
Cited by (52)
- FedProc: Prototypical contrastive federated learning on non-IID data. Future Generation Computer Systems (2023)
- Control variables, discrete instruments, and identification of structural functions. Journal of Econometrics (2021)
- Information aggregation in a financial market with general signal structure. Journal of Economic Theory (2019)
- On the estimation of treatment effects with endogenous misreporting. Journal of Econometrics (2019)