Very slow CIFS/SMB performance

I switched from NFS to SMB/CIFS because the permission system of NFS annoyed me.
I never had performance issues with NFS (1 Gbit LAN) and got about 70-90 MB/s read and write speed to my Synology NAS.



I tested my write performance with dd (writing about 500 MB to my SMB mount):



[user@archStd01 Transfer]$ dd if=/dev/zero of=/home/user/NAS/Transfer/test bs=512 count=1000000
1000000+0 records in
1000000+0 records out
512000000 bytes (512 MB, 488 MiB) copied, 675.388 s, 758 kB/s


As you can see, it performed very poorly, averaging 758 kB/s.



My fstab:



//192.168.1.100/Transfer /home/user/NAS/Transfer cifs credentials=/home/user/.smbcredentials,uid=1000,gid=1000,vers=3.0,rw 0 0
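
For comparison, mount.cifs accepts several options that commonly affect throughput, such as rsize, wsize, and actimeo (the values negotiated here are visible in the mount output quoted in the comments below). A hypothetical variant of the line above that sets them explicitly might look like this; the values shown are illustrative, not recommendations:

# Sketch only: explicit read/write buffer sizes and a longer
# attribute-cache timeout than the 1-second default.
//192.168.1.100/Transfer /home/user/NAS/Transfer cifs credentials=/home/user/.smbcredentials,uid=1000,gid=1000,vers=3.0,rw,rsize=1048576,wsize=1048576,actimeo=30 0 0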


At the moment I am going through a few SMB manuals, but I haven't found much about performance problems. Does anyone know where to start?



//edit
Performance test with dd using a 10 MB block size:



[user@archStd01 Transfer]$ dd if=/dev/zero of=/home/user/NAS/Transfer/test bs=10M count=500
500+0 records in
500+0 records out
5242880000 bytes (5.2 GB, 4.9 GiB) copied, 406.979 s, 12.9 MB/s


It's a lot better, but still far from fast.
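
For what it's worth, dd over a network mount mixes page-cache behaviour into the result. A minimal sketch of how to get a steadier number and to rule out the network itself (it assumes iperf3 is available on both the client and the NAS; the address is taken from the fstab above):

# Include the final flush in the timing so the page cache
# doesn't inflate the figure:
dd if=/dev/zero of=/home/user/NAS/Transfer/test bs=1M count=500 conv=fdatasync

# Raw TCP throughput to the NAS, independent of SMB
# (needs 'iperf3 -s' running on the NAS side):
iperf3 -c 192.168.1.100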










cifs smb nas

asked Sep 3 '17 at 13:21, edited Sep 3 '17 at 13:35 – rockZ

Comments:

  • Try to test with a bigger block size and check if the performance stays the same. – Thomas, Sep 3 '17 at 13:25

  • Try at least with a (significantly) larger bs if you insist on using dd as your "benchmark". – sebasth, Sep 3 '17 at 13:25

  • OK, I'll test with a bs of 10 MB and update my original post with the results. Thanks for the hint. – rockZ, Sep 3 '17 at 13:29

  • When you type mount, what does it say about the negotiated protocol version? – Rui F Ribeiro, Sep 3 '17 at 14:19

  • @RuiFRibeiro //192.168.1.100/Transfer on /home/user/NAS/Transfer type cifs (rw,relatime,vers=3.0,cache=strict,username=admin,domain=ARBEITSGRUPPE,uid=1000,forceuid,gid=1000,forcegid,addr=192.168.1.100,file_mode=0755,dir_mode=0755,nounix,serverino,mapposix,rsize=1048576,wsize=1048576,echo_interval=60,actimeo=1) – it says vers=3.0, so that should be good. – rockZ, Sep 3 '17 at 14:24
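
As an aside, the negotiated dialect can also be confirmed directly on the client; a minimal sketch (the /proc interface is only present while the cifs kernel module is loaded):

# List only cifs mounts, including the negotiated options:
mount -t cifs

# Kernel-side view of active SMB sessions, including the dialect:
cat /proc/fs/cifs/DebugData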
1 Answer
I was just puzzling over a similar-sounding CIFS performance problem. Transfers between a Windows client and our Samba server ran at good speed, but downloads from the server to two Ubuntu machines (running bionic) were slow. Transferring with SCP instead of CIFS showed no speed problems, so the underlying network wasn't the issue. Following the suggestions in this ubuntuforums thread, I added cache=loose to my Ubuntu client's cifs mount configuration in /etc/fstab, and speeds in both directions are now about what I expect (roughly a 7-10x improvement in my case).



//server/share /media/localMountPoint cifs cache=loose,rw,...
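
To try the option without editing /etc/fstab first, a one-off test mount works too. A sketch, using the same placeholder share and mount point as above (youruser is a placeholder; mount.cifs prompts for the SMB password):

# Hypothetical one-off mount with cache=loose for testing:
sudo mount -t cifs //server/share /media/localMountPoint -o cache=loose,rw,username=youruser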


However, as the poster on the ubuntuforums thread cautions, the mount.cifs man page warns:




cache=loose can cause data corruption when multiple readers and writers are working on the same files.




I happen to be on a home network with very few users, so this is acceptable for me.

– answered 20 mins ago by rrey (new contributor)