Quantifying dependence of Cauchy random variables

Suppose we are given two Cauchy random variables $\theta_1 \sim \mathrm{Cauchy}(x_0^{(1)}, \gamma^{(1)})$ and $\theta_2 \sim \mathrm{Cauchy}(x_0^{(2)}, \gamma^{(2)})$ that are not independent. The dependence structure of random variables can often be quantified with their covariance or correlation coefficient. However, these Cauchy random variables have no moments, so covariance and correlation do not exist.



Are there other ways of representing the dependence of these random variables? Is it possible to estimate them with Monte Carlo?

Tags: covariance, independence, copula, heavy-tailed

asked 2 hours ago by Jonas

  • 2
    You may consider general dependence metrics such as mutual information: en.wikipedia.org/wiki/Mutual_information
    – John Madden, 2 hours ago
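To make the Monte Carlo part of the question concrete, below is a minimal sketch of the comment's suggestion. Everything in it is an illustrative assumption rather than something from the thread: the joint model (a shared Cauchy component, chosen so that both marginals remain Cauchy) and the use of scikit-learn's k-nearest-neighbour mutual-information estimator.

```python
# Monte Carlo sketch: estimate the mutual information between two dependent
# Cauchy variates. The joint model below (a shared Cauchy component) is an
# illustrative assumption; any joint distribution with Cauchy marginals works.
import numpy as np
from scipy.stats import cauchy
from sklearn.feature_selection import mutual_info_regression

rng = np.random.default_rng(0)
n = 50_000

# Sums of independent Cauchys are Cauchy, so both marginals are Cauchy(0, 1).
common = cauchy.rvs(size=n, random_state=rng)
theta1 = 0.7 * common + 0.3 * cauchy.rvs(size=n, random_state=rng)
theta2 = 0.5 * common + 0.5 * cauchy.rvs(size=n, random_state=rng)

# Mutual information is invariant under strictly monotone transforms of each
# marginal, so mapping each variate through its cdf tames the heavy tails
# without changing the quantity being estimated.
u1 = cauchy.cdf(theta1)
u2 = cauchy.cdf(theta2)

mi = mutual_info_regression(u1.reshape(-1, 1), u2, n_neighbors=5)[0]
print(f"estimated mutual information: {mi:.3f} nats")
```

Independent variates would give an estimate near zero; the estimate grows with the weight of the shared component.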

2 Answers


Just because they don't have a covariance doesn't mean that the basic $x^T\Sigma^{-1} x$ structure usually associated with covariances can't be used. In fact, the multivariate ($k$-dimensional) Cauchy density can be written as:



$$f(\mathbf{x}; \boldsymbol{\mu}, \boldsymbol{\Sigma}, k) = \frac{\Gamma\left(\frac{1+k}{2}\right)}{\Gamma\left(\frac{1}{2}\right)\pi^{k/2}\left|\boldsymbol{\Sigma}\right|^{1/2}\left[1+(\mathbf{x}-\boldsymbol{\mu})^T\boldsymbol{\Sigma}^{-1}(\mathbf{x}-\boldsymbol{\mu})\right]^{\frac{1+k}{2}}}$$



which I have lifted from the Wikipedia page. This is just a multivariate Student-$t$ distribution with one degree of freedom.



For the purposes of developing intuition, I would just use the normalized off-diagonal elements of $\Sigma$ as if they were correlations, even though they are not. They reflect the strength of the linear relationship between the variables in a way very similar to that of a correlation; $\Sigma$ has to be symmetric positive definite; if $\Sigma$ is diagonal, the elliptical dependence vanishes (although, unlike the Gaussian case, a diagonal $\Sigma$ does not make the components of a multivariate $t$ fully independent); etc.



Maximum likelihood estimation of the parameters can be done using the E-M algorithm, which in this case is easily implemented. The log-likelihood, up to an additive constant, is:



$$\mathcal{L}(\mu, \Sigma) = -\frac{n}{2}\log|\Sigma| - \frac{k+1}{2}\sum_{i=1}^n\log(1+s_i)$$



where $s_i = (x_i-\mu)^T\Sigma^{-1}(x_i-\mu)$. Differentiating leads to the following simple expressions:



$$\mu = \sum_i w_i x_i \Big/ \sum_i w_i$$



$$\Sigma = \frac{1}{n}\sum_i w_i(x_i-\mu)(x_i-\mu)^T$$



$$w_i = (1+k)/(1+s_i)$$



The E-M algorithm just iterates over these three expressions, substituting the most recent estimates of all the parameters at each step.
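As a concrete illustration of this iteration, here is a minimal NumPy sketch. It is not code from the answer: the function name, the convergence rule, and the simulated bivariate example are all my own illustrative choices.

```python
# E-M for the multivariate Cauchy (multivariate t with 1 df): iterate the
# three closed-form updates above until mu and Sigma stop changing.
import numpy as np

def fit_cauchy_em(x, n_iter=500, tol=1e-9):
    n, k = x.shape
    mu = np.median(x, axis=0)  # robust start; the sample mean is useless here
    sigma = np.eye(k)
    for _ in range(n_iter):
        diff = x - mu
        s = np.einsum("ij,jk,ik->i", diff, np.linalg.inv(sigma), diff)  # s_i
        w = (1.0 + k) / (1.0 + s)                                       # w_i
        mu_new = (w[:, None] * x).sum(axis=0) / w.sum()
        diff = x - mu_new
        sigma_new = diff.T @ (diff * w[:, None]) / n                    # weighted scatter
        done = np.allclose(mu_new, mu, atol=tol) and np.allclose(sigma_new, sigma, atol=tol)
        mu, sigma = mu_new, sigma_new
        if done:
            break
    return mu, sigma

# Simulate a bivariate Cauchy: a normal vector with scatter Sigma divided by
# the square root of an independent chi-square(1) variate.
rng = np.random.default_rng(1)
Sigma = np.array([[1.0, 0.6], [0.6, 1.0]])
z = rng.standard_normal((20_000, 2)) @ np.linalg.cholesky(Sigma).T
x = z / np.sqrt(rng.chisquare(df=1, size=20_000))[:, None]

mu_hat, Sigma_hat = fit_cauchy_em(x)
print(mu_hat)  # near (0, 0)
print(Sigma_hat[0, 1] / np.sqrt(Sigma_hat[0, 0] * Sigma_hat[1, 1]))  # near 0.6
```

The normalized off-diagonal recovered at the end is the "pseudo-correlation" discussed above.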



For more on this, see "Estimation Methods for the Multivariate t Distribution", Nadarajah and Kotz, 2008.

answered 2 hours ago (edited 2 hours ago) by jbowman

While $\text{cov}(X,Y)$ does not exist for a pair of variates with Cauchy marginals, $\text{cov}(\Phi(X),\Phi(Y))$ does exist for bounded functions $\Phi(\cdot)$. Borrowing from the concept of copulas, one can turn $X$ and $Y$ into $\mathrm{Uniform}(0,1)$ variates by using their marginal cdfs, $\Phi_X(X)$ and $\Phi_Y(Y)$, and look at the covariance or correlation of the resulting variates.
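A minimal Monte Carlo sketch of this idea, using an invented joint model for illustration (a shared Cauchy component, so both marginals are Cauchy; none of the names below come from the answer):

```python
# Monte Carlo sketch of the copula idea: correlate Phi_X(X) and Phi_Y(Y).
# The joint model (shared Cauchy component) is an illustrative assumption.
import numpy as np
from scipy.stats import cauchy, spearmanr

rng = np.random.default_rng(2)
n = 100_000

common = cauchy.rvs(size=n, random_state=rng)
x = 0.7 * common + 0.3 * cauchy.rvs(size=n, random_state=rng)  # Cauchy(0, 1)
y = 0.5 * common + 0.5 * cauchy.rvs(size=n, random_state=rng)  # Cauchy(0, 1)

u = cauchy.cdf(x)  # Phi_X(X): exactly Uniform(0,1) because the marginal is known
v = cauchy.cdf(y)  # Phi_Y(Y)

print("correlation of the uniform variates:", np.corrcoef(u, v)[0, 1])
# With unknown marginals, replacing the cdfs by empirical cdfs gives
# Spearman's rank correlation, which targets the same population quantity:
print("Spearman's rho on the raw data:", spearmanr(x, y)[0])
```

With known marginal cdfs the transformed variates are exactly uniform; with estimated marginals this construction reduces to Spearman's rank correlation.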

answered 2 hours ago by Xi'an