Can the alpha and lambda values of a glmnet object's output determine whether it is ridge or lasso?
Given a glmnet model fitted with caret's train(), where the trControl method is "cv" with number = 5 (i.e. 5-fold cross-validation), I obtained bestTune values of alpha = 0.1 and lambda = 0.007688342. On inspecting the glmnet object, I notice that the candidate alpha values start at 0.1.
Can I infer that the method used is lasso rather than ridge, because the chosen alpha value is positive?



In general, can the values of alpha and lambda indicate which model is being used?
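One way to let the tuning decide between ridge, lasso, or something in between is to supply an explicit grid whose alpha range includes both 0 and 1. A minimal sketch (assuming the caret and glmnet packages are installed; the toy x and y, the alpha step of 0.1, and the lambda grid below are illustrative stand-ins, not recommendations):

```r
library(caret)

set.seed(1)
x <- matrix(rnorm(100 * 10), 100, 10)  # toy predictors (stand-in for your data)
y <- rnorm(100)                        # toy response

ctrl <- trainControl(method = "cv", number = 5)    # 5-fold CV, as in the question
grid <- expand.grid(alpha  = seq(0, 1, by = 0.1),  # 0 = ridge ... 1 = lasso
                    lambda = 10^seq(-4, 0, length.out = 20))

fit <- train(x, y, method = "glmnet", trControl = ctrl, tuneGrid = grid)
fit$bestTune  # the selected alpha and lambda
```

If bestTune$alpha comes back strictly between 0 and 1, the selected model is an elastic net, not pure ridge or pure lasso.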
regression generalized-linear-model cross-validation caret
asked 6 hours ago by red4life93
2 Answers
As far as I understand glmnet, $\alpha=0$ gives a ridge penalty and $\alpha=1$ gives a lasso penalty (rather than the other way around), and glmnet can fit both of those end cases.



The penalty with $\alpha=0.1$ is fairly similar to the ridge penalty, but it is not the ridge penalty. If the search did not consider $\alpha$ below $0.1$, you cannot infer much from the fact that the chosen value sits at that endpoint. If you know that an $\alpha$ only slightly larger than $0.1$ performed worse, then a wider range would likely have chosen a smaller $\alpha$, but that does not suggest it would have been $0$; I expect it would not. Conversely, if the grid of values is coarse, a value larger than $0.1$ may well have been better.



[You may want to check whether there was some other reason that $\alpha$ ended up at an endpoint; e.g. I seem to recall $\lambda$ being set to an endpoint in forecasting applications when the coefficients for lambdaOpt were not saved.]
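To remove the ambiguity entirely, you can fit the two end cases directly with glmnet, which accepts $\alpha=0$ and $\alpha=1$ exactly. A minimal sketch (assuming the glmnet package is installed; x and y are toy data standing in for yours):

```r
library(glmnet)

set.seed(1)
x <- matrix(rnorm(100 * 10), 100, 10)  # toy predictors
y <- rnorm(100)                        # toy response

ridge_fit <- glmnet(x, y, alpha = 0)   # pure ridge: coefficients shrunk, none exactly zero
lasso_fit <- glmnet(x, y, alpha = 1)   # pure lasso (glmnet's default)
```

A quick practical check on a fitted path: a lasso fit typically has some coefficients exactly at zero, while a ridge fit does not.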
answered 6 hours ago by Glen_b (edited 6 hours ago)

Absolutely! The $\alpha$ parameter can be adjusted to fit either a lasso or a ridge regression (or something in between). Recall that the loss function which elastic net minimizes is $$\frac{1}{2N}\sum^N_{i=1}(y_i-\beta_0-x_i^T\beta)^2+\lambda\sum_{j=1}^p\left(\tfrac{1}{2}(1-\alpha)\beta_j^2+\alpha|\beta_j|\right).$$
Focus on the second sum (the one multiplied by $\lambda$). If you set $\alpha=1$, the first term inside this sum becomes $0$, and the objective becomes exactly the function that lasso minimizes (the lasso loss function). If you set $\alpha=0$, the second term becomes $0$ and you are left with ridge.



            You can check the loss for Ridge and Lasso in this book and for elastic net in this paper.
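The endpoint behaviour can be verified numerically. A small base-R check of the penalty term above (enet_penalty is a throwaway helper written for this check, not part of glmnet):

```r
# Elastic net penalty: sum_j ( (1 - alpha)/2 * beta_j^2 + alpha * |beta_j| )
enet_penalty <- function(beta, alpha) {
  sum((1 - alpha) / 2 * beta^2 + alpha * abs(beta))
}

beta <- c(0.5, -2, 3)

# alpha = 0: reduces to the ridge penalty (1/2) * sum(beta^2)
stopifnot(isTRUE(all.equal(enet_penalty(beta, 0), sum(beta^2) / 2)))

# alpha = 1: reduces to the lasso penalty sum(|beta|)
stopifnot(isTRUE(all.equal(enet_penalty(beta, 1), sum(abs(beta)))))
```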
answered 6 hours ago by Bananin

• This looks like a good answer but can you edit to include citations for the hyperlinks? Over time, links die. – Sycorax, 6 hours ago