How to recover RAM space that should be free?
Many people use the ps_mem.py script to find out how much RAM processes are using. In this case the script reported something like this:
---------------------------------
278.4 MiB
=================================
So the whole system uses 278.4 MiB, but free says something quite different:
# free
total used free shared buff/cache available
Mem: 1.8G 756M 980M 57M 131M 1.0G
Swap: 2.5G 11M 2.5G
Total: 4.3G 767M 3.4G
So according to free, the system uses 756M. It's not the page cache, and it's not tmp files.
I also tried:
# echo "3" > /proc/sys/vm/drop_caches
to see whether it would make any difference, but nothing changed.
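For reference, a minimal sketch (an editorial addition, not part of the original post) of how one could snapshot /proc/meminfo around that write to see which fields actually change; the keys used are standard /proc/meminfo fields:

# Sketch: compare /proc/meminfo before and after dropping caches.
# Must run as root, because writing to drop_caches requires it.

def read_meminfo():
    """Parse /proc/meminfo into a {field: value_in_kB} dict."""
    info = {}
    with open("/proc/meminfo") as f:
        for line in f:
            key, rest = line.split(":", 1)
            info[key] = int(rest.strip().split()[0])  # value is in kB
    return info

before = read_meminfo()

with open("/proc/sys/vm/drop_caches", "w") as f:
    f.write("3\n")  # drop page cache, dentries and inodes

after = read_meminfo()

for field in ("MemFree", "Buffers", "Cached", "Slab", "SReclaimable"):
    print(f"{field}: {before[field]} kB -> {after[field]} kB")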
So how can I release the pages that are being held for some reason? I have no idea what is using the space or why, and I don't know how to recover it. For now the only option is to reboot the machine.
Here's a screenshot showing which processes are left. Can you explain the RAM utilization based on that?
[screenshot: remaining processes]
asked May 8 '16 at 18:15, edited May 8 '16 at 19:14 – Mikhail Morfikov
Out of interest, is something complaining that there isn't enough RAM for it to run?
– forquare, May 8 '16 at 18:27
I don't see anything like that. I think every app works just fine.
– Mikhail Morfikov, May 8 '16 at 18:31
Linux uses free memory to cache files to avoid retrieving them from the disk. That memory is freed as soon as something needs it.
– tiktak, May 8 '16 at 18:43
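As a side note (an editorial addition, assuming a kernel recent enough to expose MemAvailable in /proc/meminfo), the memory that genuinely cannot be reclaimed is better estimated as MemTotal minus MemAvailable than by looking at the used column:

# Sketch: estimate genuinely unreclaimable memory from /proc/meminfo.
fields = {}
with open("/proc/meminfo") as f:
    for line in f:
        key, rest = line.split(":", 1)
        fields[key] = int(rest.strip().split()[0])  # kB

unreclaimable_kb = fields["MemTotal"] - fields["MemAvailable"]
print(f"MemTotal:      {fields['MemTotal'] / 1024:.1f} MiB")
print(f"MemAvailable:  {fields['MemAvailable'] / 1024:.1f} MiB")
print(f"Unreclaimable: {unreclaimable_kb / 1024:.1f} MiB")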
1 Answer
Due to the complexity of virtual memory management (all of which exists so that the system uses as little physical memory as possible), it's practically impossible to pin down how much RAM is actually in use. See this link.
So whatever your Python script reports won't reflect the actual state.
What's cached is effectively free, so no worries there (dropping caches doesn't really free anything; it just discards pages the kernel would have reclaimed anyway if it needed them).
What free reports is what the kernel knows about the system. It can't be wrong, because the kernel is the one serving the memory. But it doesn't equal the sum of the memory used by each individual process, because of various mechanisms: shared memory (libraries), copy-on-write memory (after forking, only pages that are written to are actually duplicated), uninitialized (zeroed-out) pages, loaded program code (also shared), virtual memory not backed by RAM, swapped-out pages, interprocess shared memory, kernel memory reserved by kernel modules, memory used by the kernel itself (including the page tables), and so on.
The point is: if the kernel reports pages as used, they must be used for something. If you want to free up memory, it has to come from somewhere. Each running process, each module, and the kernel itself may have mechanisms to release memory it doesn't need right now and reload it later (it's up to application writers whether that extra complexity is worth implementing). But the kernel will take care of it when you really need new memory: it will drop filesystem caches when you request more memory, it will push stale pages to swap if you have one, and if you use zram or something similar it will compress pages instead. In the end, if you really do run out of space, the OOM killer will kill something to prevent the system from locking up. The process is far too complicated to assume you know better than what is going on inside.
– orion, answered May 8 '16 at 18:36
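A minimal sketch (an editorial addition; reading other users' /proc/&lt;pid&gt;/smaps generally requires root) of the accounting described above: sum the proportional set size (Pss) over all processes, which is the metric ps_mem.py is based on, and compare it with what the kernel reports as used once caches are excluded. The gap is roughly kernel-side memory (slab, module allocations, page tables), although mapped file pages make the comparison inexact:

import glob

def read_meminfo():
    info = {}
    with open("/proc/meminfo") as f:
        for line in f:
            key, rest = line.split(":", 1)
            info[key] = int(rest.strip().split()[0])  # kB
    return info

# Sum Pss over every process we can read.
total_pss_kb = 0
for path in glob.glob("/proc/[0-9]*/smaps"):
    try:
        with open(path) as f:
            for line in f:
                if line.startswith("Pss:"):
                    total_pss_kb += int(line.split()[1])
    except (FileNotFoundError, PermissionError, ProcessLookupError):
        continue  # process exited or is not readable

m = read_meminfo()
used_minus_caches_kb = (m["MemTotal"] - m["MemFree"] - m["Buffers"]
                        - m["Cached"] - m["SReclaimable"])
print(f"Userspace (sum of Pss): {total_pss_kb / 1024:.1f} MiB")
print(f"Used excluding caches:  {used_minus_caches_kb / 1024:.1f} MiB")
print(f"Difference:             {(used_minus_caches_kb - total_pss_kb) / 1024:.1f} MiB")

If the Pss sum stays near the ~278 MiB figure while the used figure remains much higher, the missing memory is being held outside of any process, which would point at the kernel, as discussed in the comments below.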
So is there no way to recover the space and find out what uses it? I think all the processes running right now on my machine are the same ones that are running right after the PC starts. Right after boot it's ~300 MiB; now it's 750M+ and I don't know why.
– Mikhail Morfikov, May 8 '16 at 18:45
You can find out who's using how much from ps output; it's just that you don't know what's counted twice, three times, or maybe zero times (lazy allocation). But it should show you which process is hogging the resources if you observe the change in usage over time. If it's not that (if your tool really shows everything that may be used by userspace processes), then it may be some kernel module. In that case, read up on this: bbs.archlinux.org/viewtopic.php?id=108421
– orion, May 8 '16 at 18:48
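A rough sketch of the monitoring suggested above (an editorial addition; the 60-second interval and the use of VmRSS from /proc/&lt;pid&gt;/status are arbitrary choices): sample the resident set size of every process twice and report which ones grew the most in between:

import glob
import time

def rss_by_pid():
    """Return {pid: (name, VmRSS in kB)} for all readable processes."""
    result = {}
    for status in glob.glob("/proc/[0-9]*/status"):
        pid = status.split("/")[2]
        try:
            name, rss = None, None
            with open(status) as f:
                for line in f:
                    if line.startswith("Name:"):
                        name = line.split()[1]
                    elif line.startswith("VmRSS:"):
                        rss = int(line.split()[1])
            if rss is not None:  # kernel threads have no VmRSS
                result[pid] = (name, rss)
        except (FileNotFoundError, ProcessLookupError):
            continue
    return result

first = rss_by_pid()
time.sleep(60)  # arbitrary interval; adjust as needed
second = rss_by_pid()

growth = []
for pid, (name, rss) in second.items():
    if pid in first:
        growth.append((rss - first[pid][1], name, pid))

for delta, name, pid in sorted(growth, reverse=True)[:10]:
    print(f"{name} (pid {pid}): {delta:+d} kB")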
stackoverflow.com/questions/662526/…
– orion, May 8 '16 at 18:48
It's none of those. I just updated the question; there's a picture from a TTY showing the processes that are left. I wanted to kill every process that might free the RAM, and I failed. Any idea?
– Mikhail Morfikov, May 8 '16 at 19:14
Run lsmod and tell us which module uses the most memory?
– orion, May 8 '16 at 19:27
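For that check, a small sketch (an editorial addition) that reads /proc/modules, which is the same data lsmod prints, and lists the largest loaded modules; note this is the size of the loaded module image, not whatever dynamic memory the module may have allocated:

# Sketch: list the largest loaded kernel modules by size.
rows = []
with open("/proc/modules") as f:
    for line in f:
        fields = line.split()
        name, size_bytes = fields[0], int(fields[1])
        rows.append((size_bytes, name))

for size_bytes, name in sorted(rows, reverse=True)[:15]:
    print(f"{name:30s} {size_bytes / 1024:6.0f} KiB")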