[SOLVED] WhatPulse using 300MB of RAM

So previously I posted about WhatPulse using 200MB of RAM, though it didn’t seem like anyone could come up with a solution. Since then I’ve updated from version 2.3 to 2.3.1 hoping that would fix it, and now it’s using 300MB of RAM.

The solution from all the other posts was: don’t leave the window open. Keep it closed when you’re not using it.

Doesn’t help; mine goes up to 190MB after a while.

Closing the window does the same thing for me as it does for Andreasvb: it drops down to around 5MB and climbs its way back up to 180MB within 10-15 minutes. It also seems like the larger the database, the higher the RAM usage. On my desktop, which has the largest database by far, it sits around 180MB, and that number has slowly been growing over time as I add more input. My laptop, which I only use sparingly, has a much smaller database because of my lack of use and only uses 60MB of RAM. Finally, the computer I almost never touch because I’ve just been seeding things on it uses only 35-40MB of RAM and has the smallest database, since almost all of what it records is network stats. Not sure if that helps any, but that seems to be the case for me.

I usually test with both small and rather large databases (±100MB), but I haven’t seen this happen. Could you put your database somewhere (Dropbox/SkyDrive?) for me to download and run with, to see what happens?

Maybe there’s something about the way your stats are recorded.

https://drive.google.com/file/d/0B1XGacZhCUwaZnpJMWsxX21Sb0U/edit?usp=sharing

Sorry for the late response. End of the semester, so a mixture of doing nothing and everything at the same time.

Here’s mine: https://mega.co.nz/#!h5cETZLA!T5Y8z2LmsvyhgCYTD7Yy9DTtpE5TVjFlE8_qYDXWeMA

I think it may be worth looking at the upload/download stats, including the unpulsed traffic, on the systems that are showing the high memory usage.

For example, I get this 250-300MB RAM issue on the one machine which typically uploads at a fairly steady rate of around 1-2MB/s to the internet. Interestingly, the same machine climbs even higher when I’m using my Second Life viewer remotely over the gigabit LAN. When I do that I use the -c rgb method (uncompressed video for higher image quality on high-bandwidth networks), and while it’s in use the upload on the system is typically 1-2MB/s to the public internet plus an additional 70-90MB/s of traffic to 10.0.0.0/8 addresses.

When the latter is in use, WhatPulse’s memory will continue to rise even further; in a prolonged session I have seen it exceed the 1.0GiB mark. I strongly suspect this growth is tied to systems with large volumes of sustained network traffic.

I did some admittedly not particularly scientific testing around this issue, which appeared to support the theory that the increase in memory usage in this particular case may very well be related to the fact that WhatPulse for some reason opted to go the route of full libpcap packet capturing of all network traffic. (For reference, I can’t comment on Windows in this area, but for Linux/BSD, and most likely Macs as well, this is not the optimal way to obtain the necessary data; that is a separate issue I will try to find the time to elaborate on in a feature suggestion some time.)

During the various tests I attempted, I believe I may have identified something that points towards an optimization which could not only reduce this excessive memory consumption on these systems but also reduce the CPU load, which is also significant when the host has heavy network activity.

My testing mostly involved comparing the memory consumed by WhatPulse with the memory consumed by tcpdump, which is also a libpcap-based packet capture application. During this testing I found that both programs, when started at the same time, initially increase memory utilization linearly before levelling off at a peak roughly proportional to traffic. Experimenting, I found that WhatPulse’s increase averages a little over 105% of tcpdump’s. However, this figure only held when tcpdump was capturing entire full-length Ethernet frames, which would make sense for an IDS like Snort or similar that needs to perform packet payload inspection.
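
For anyone who wants to repeat the comparison: I sampled resident memory by reading VmRSS out of /proc/&lt;pid&gt;/status at intervals. A minimal sketch of that sampling (Linux-only; the PID argument is whatever your process manager reports for whatpulse or tcpdump, so that part is down to your setup):

```c
/* Rough sketch: print a process's resident set size (VmRSS) from
 * /proc/<pid>/status on Linux. Build: cc -o rss rss.c
 * Run it periodically against the whatpulse and tcpdump PIDs and
 * compare the two curves over time. */
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <sys/types.h>

static long vm_rss_kb(pid_t pid) {
    char path[64], line[256];
    long kb = -1;
    snprintf(path, sizeof path, "/proc/%d/status", (int)pid);
    FILE *f = fopen(path, "r");
    if (!f) return -1;
    while (fgets(line, sizeof line, f)) {
        if (strncmp(line, "VmRSS:", 6) == 0) {
            kb = strtol(line + 6, NULL, 10); /* value is reported in kB */
            break;
        }
    }
    fclose(f);
    return kb;
}

int main(int argc, char **argv) {
    if (argc < 2) { fprintf(stderr, "usage: %s <pid>\n", argv[0]); return 1; }
    printf("VmRSS: %ld kB\n", vm_rss_kb((pid_t)atoi(argv[1])));
    return 0;
}
```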

What does not make sense to me is that, when compared with tcpdump using the -s128 option to capture only the required data and avoid capturing, buffering, and processing an extra kilobyte or more of utterly irrelevant payload bytes, the difference between WhatPulse and tcpdump grows significantly: approximately by a multiplier slightly over the average packet size divided by 128. This indicates to me that for some reason WhatPulse is not using the snaplen feature to capture the data as minimally and efficiently as possible, and I am unsure why that would be the case.
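
To illustrate what I mean by the snaplen feature: in libpcap the snapshot length is a single argument when the capture handle is opened, and it bounds how many bytes of each frame get copied and buffered at all. A minimal sketch of the idea (the device name and the 128-byte figure are just examples here; I obviously don’t know what WhatPulse’s actual code looks like):

```c
/* Sketch: open a libpcap capture handle that copies only the first
 * 128 bytes of each frame (enough for link + IP + TCP/UDP headers)
 * instead of full frames. Link with -lpcap. */
#include <pcap/pcap.h>
#include <stdio.h>

int main(void) {
    char errbuf[PCAP_ERRBUF_SIZE];
    /* snaplen = 128: every packet is truncated to 128 bytes before it
     * is copied into the capture buffer, so payload bytes are never
     * buffered or processed. tcpdump -s128 does the same thing. */
    pcap_t *h = pcap_open_live("eth0", 128, 0 /* no promiscuous mode */,
                               1000 /* read timeout in ms */, errbuf);
    if (h == NULL) {
        fprintf(stderr, "pcap_open_live: %s\n", errbuf);
        return 1;
    }
    /* ...pcap_loop()/pcap_next_ex() as usual; header->caplen <= 128... */
    pcap_close(h);
    return 0;
}
```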

Maybe I am missing something that WhatPulse needs to collect for the statistics which means it needs more than just the headers. These are the data points I am aware of that WhatPulse absolutely needs:

IP/IPv6 Source Address (to identify downloaded data from the internet)
IP/IPv6 Destination Address (same, for upload to the internet)
IP/IPv6 Packet Length (to add this packet’s bytes to the byte counter)

I believe WhatPulse Premium would also require the following data points:

IP Protocol Number/IPv6 Next Header (particularly protocol 6, TCP, or 17, UDP)
TCP/UDP Source/Destination Port (to match the socket & application)

And then I hit the point where I am at a loss: what else does WhatPulse need to collect the stats? I can’t think what I might be forgetting right now, or perhaps the above list is sufficient. If it is, then I would suggest checking the code and making sure you are setting a snaplen no greater than 128, which is in most cases overkill in itself. Even a snaplen of 96 will be enough to capture the entire Ethernet, IPv6, and TCP headers, including the IPv6 and TCP options fields, and reach into the payload. I have not actually seen any standard networking software produce packets with longer headers; only specialized network probing and testing software, and the occasional vulnerability probe or active network attack, in the one packet per million sort of range.
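
To make that concrete, here is a rough sketch (my own illustration, not anything from WhatPulse) of pulling every data point in the two lists above out of a captured IPv4 frame; everything sits within the first ~54 bytes (14 Ethernet + 20 IP + 20 TCP), comfortably inside a 96 or 128 byte snaplen:

```c
/* Sketch: extract the fields listed above from a captured IPv4
 * frame. All offsets fall within the first ~54 bytes, so a small
 * snaplen loses nothing that the statistics need. */
#include <arpa/inet.h>
#include <stdint.h>
#include <stdio.h>
#include <string.h>

static uint16_t be16(const uint8_t *p) {      /* read big-endian u16 */
    uint16_t v;
    memcpy(&v, p, 2);
    return ntohs(v);
}

void inspect(const uint8_t *pkt, uint32_t caplen) {
    if (caplen < 14 + 20) return;             /* Ethernet + minimal IPv4 */
    if (be16(pkt + 12) != 0x0800) return;     /* IPv4 only in this sketch */

    const uint8_t *ip = pkt + 14;
    uint32_t ihl = (ip[0] & 0x0f) * 4;        /* IPv4 header length */
    uint16_t total_len = be16(ip + 2);        /* -> byte counter */
    uint8_t  proto = ip[9];                   /* 6 = TCP, 17 = UDP */

    char src[INET_ADDRSTRLEN], dst[INET_ADDRSTRLEN];
    inet_ntop(AF_INET, ip + 12, src, sizeof src);  /* source address */
    inet_ntop(AF_INET, ip + 16, dst, sizeof dst);  /* destination address */

    if ((proto == 6 || proto == 17) && caplen >= 14 + ihl + 4) {
        uint16_t sport = be16(ip + ihl);      /* -> socket/application */
        uint16_t dport = be16(ip + ihl + 2);
        printf("%s:%u -> %s:%u proto %u len %u\n",
               src, sport, dst, dport, proto, total_len);
    }
}
```

In practice inspect() would just be called from the pcap callback in the earlier sketch, with the frame bytes and caplen that libpcap hands over.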

It could be that this is a completely unrelated issue, or that my attempts at testing were far from ideal. I’d have preferred to test and analyse this in a more controlled and rigorous situation and to provide some hard data to go with it, but I have no idea how I would go about that with WhatPulse, as I am not aware of any published source code. Anyway, I hope something here might be of some value.

Thanks for your pretty long research post, looks good.

[quote=“MttJocy, post:8, topic:13384”]
This indicates to me that for some reason WhatPulse is not using the snaplen feature to capture the data as minimally and efficiently as possible, and I am unsure why that would be the case.[/quote]

You’re probably unsure of the reason because the client doesn’t capture the entire packet, as you’re thinking. It just takes the header (64 bytes) so it can see the packet length, IPs, and ports.

I have not been able to reproduce the correlation you’re seeing with network traffic. I’ve had an offline 2.3.1 client running as a test on a file server for a few months, and its memory usage is around 20MB, as it should be.

I’m going to look into the database angle using the databases posted here.

So I thought I’d come check if there was a solution or anything, because I was reminded of the issue when I looked at my task manager. Screenshot here: http://gyazo.com/d1455b38a9568ebff53ec6981d4f22d6

I never have the window open, so that’s not the issue. It’s always minimized to the tray.

I don’t really understand MttJocy’s research; I’m not technically inclined enough to interpret the results. However, I did turn off the network statistics in WhatPulse, so it’s not recording network stats anymore, but it’s still using just as much RAM.

There is one thing that strikes me as odd, though: I’m looking at WhatPulse’s uptime menu and it says my current uptime is 14 hours, but I only turned on the computer a couple of hours ago… and my longest uptime is 1993 days. Either I’m misunderstanding something or it’s displaying something wrong.

And lastly, my database (260MB): https://drive.google.com/file/d/0BwZo1JLITjE5OHE2NjJydW9tREk/edit?usp=sharing

It’s been many months since my last reply, but I thought maybe I’d give another update. I don’t know what the team is doing, nor do I care at this point. I’ll keep running the software because I like the stats.

New screenshot. Now using 450MB of RAM: http://gyazo.com/72064d872f833800803e0d41fbc2382c

Have you tried running the latest beta client? It’s only using 26-30 MB of RAM on each of the three computers I’m running it on.

I’m not sure what timezone you’re in, but here on earth 20 days is not even 1 month, let alone ‘many months’.

Plus, if you had looked around, you might not even have this problem anymore: in a different post I asked you to try the latest client, which fixes this issue, and you never bothered to reply (http://whatpulse.org/forums/showthread.php?tid=5132&page=2).

Thread closed, good luck.