Support Board


[Locked] - SC burns CPU resources on continually accessing data files even with markets closed


[2016-04-02 00:32:10]
i960 - Posts: 360
With all markets totally closed and no incoming market data at all, SC still burns CPU doing the following:

1. Needlessly tries to create files for already existing intraday/daily data files (which of course error because they already exist), but this may be a non-issue in itself and just part of the access function.
2. Needlessly reads 1 record of intraday data from the end of all open intraday/daily data files at a frequent and continual rate, across every single chart open across all chartbooks.
3. Can be easily reproduced by opening a chartbook and then monitoring it with procmon (Sysinternals). In some cases (namely after-market equities still being open), one can even *close* the chartbooks entirely and *still* see SC accessing and updating the data files.
4. It does this regardless of being connected to or disconnected from one's trading service. The only solution is completely closing out the chartbooks. In the case of AH equities, simply disconnecting suffices.

This isn't just 1% CPU either; with a large number of chartbooks/charts open, one can see 8-10%+ CPU doing basically nothing useful whatsoever. While that may not seem like a big deal, because one can always just close the platform when not trading, it is still going to exhibit this behavior any time it is open and in use (including slow/closed markets, etc.), which includes normal trading sessions. I suspect this behavior is entirely driven by this functionality: http://www.sierrachart.com/index.php?page=doc/doc_IntradayDataFileFormat.html#FeedSierraChartData and/or something related to multi-instance support.

If the *vast* majority of people are not writing scid or dly files externally outside of Sierra Chart (which is most likely the case), there needs to be a way of either disabling this across the board or making it a per-chart setting. Additionally, if there is some kind of multi-instance need for doing this, then there should simply be a better way of handling it, because having processes continuously tail 40 bytes from the end of every single chart's file is far from efficient. If you're going to do the equivalent of stat()ing every single chart that's open, why not simply check the file size and, if it hasn't changed, not bother reading 40 bytes of data that hasn't changed either.
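The stat-then-read approach suggested above can be sketched as follows. This is a minimal illustration in Python, not Sierra Chart's actual code; the 40-byte record size comes from this post, and it only detects appends (see the later reply for why this is insufficient for bar-based files):

```python
import os

RECORD_SIZE = 40  # assumed size of one intraday record, per this post

_last_sizes = {}  # path -> last size seen, so unchanged files are never read

def read_new_last_record(path):
    """Return the file's last record only if the file has grown since the
    previous call; otherwise return None without reading any data."""
    size = os.stat(path).st_size           # cheap metadata-only call
    if size == _last_sizes.get(path) or size < RECORD_SIZE:
        return None                        # unchanged (or too small): skip the read
    _last_sizes[path] = size
    with open(path, "rb") as f:
        f.seek(size - RECORD_SIZE)         # tail only the final record
        return f.read(RECORD_SIZE)
```

The point of the design is that the per-poll cost drops from a seek-and-read per chart to a single stat call when nothing has changed.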

I've captured both a screen capture of the procmon log and even made a video showing this happening (+me disconnecting and the same thing continuing).

Video: https://www.youtube.com/watch?v=bHwm2ZmsVZc

** Remember now, the global futures markets are closed, there is no incoming market data driving the charts whatsoever.
Attachment: Screen Shot 2016-04-01 at 1702.17.png (1.27 MB, attached 2016-04-02 00:05:58 UTC)
[2016-04-02 03:26:01]
Sierra Chart Engineering - Posts: 104368
We will look this over, and improvements are coming in this area, but we have never known any case where frequently reading the last record, which is cached by the operating system, presents any measurable or noticeable load.

You should notice the CPU usage remain at 0 percent.

Either something is not right on your system, or the CPU usage is coming from somewhere else.

Also, if you are using an extremely low Chart Update Interval, that might contribute to the problem, but we still do not think this would be measurable. Certainly not 8 to 10 percent.

At this point in time, anyone else checking their CPU usage, so long as they are not using studies which continuously update, is not going to notice CPU usage higher than 0 percent as long as they are not interacting with Sierra Chart.

For example, running Sierra Chart on our system, we notice 0% CPU usage. We have 50 charts open. What you claim to be the problem is not the problem. It is something else on your system, like antivirus software, or something in what you are doing.
Sierra Chart Support - Engineering Level

Your definitive source for support. Other responses are from users. Try to keep your questions brief and to the point. Be aware of support policy:
https://www.sierrachart.com/index.php?l=PostingInformation.php#GeneralInformation

For the most reliable, advanced, and zero cost futures order routing, *change* to the Teton service:
Sierra Chart Teton Futures Order Routing
Date Time Of Last Edit: 2016-04-02 03:32:49
[2016-04-02 03:29:50]
i960 - Posts: 360
Chart interval is 250 ms, not extremely low. There's nothing wrong with the system, and I get what you're saying about cached reads. But if a process is doing that to, let's say, 50 charts, continuously, it's doing the equivalent of stat()-ing the same 50 files over and over and over. Even if this consumes 2% CPU, it's still dumb.

Add it to the list I guess.
Date Time Of Last Edit: 2016-04-02 03:31:28
[2016-04-02 03:35:22]
Sierra Chart Engineering - Posts: 104368
For example, running Sierra Chart on our system, we notice 0% CPU usage. We have 50 charts open. What you claim to be the problem is not the problem. It is something else on your system, like antivirus software, or something in what you are doing.

This is also not dumb. Data files have to be checked for modifications. It is completely normal for an idle program to still be doing a lot of checking at frequent intervals.
Date Time Of Last Edit: 2016-04-02 03:40:09
[2016-04-02 03:37:33]
Sierra Chart Engineering - Posts: 104368
We now set the Chart Update Interval on our Sierra Chart to 20 milliseconds.

This is 2500 reads a second. CPU usage remains at 0 percent.

Update:
Actually, we realize many of the charts we have open are daily charts, so there are not this many reads per second, but we would still expect the CPU usage to remain at zero percent.
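The arithmetic behind the 2500 figure, assuming the 50 open charts mentioned in the earlier reply, each checking its file once per update interval:

```python
# Reads per second implied by the test: each open intraday chart checks
# its data file once per Chart Update Interval.
charts = 50                 # open charts, per the earlier post
update_interval_ms = 20     # Chart Update Interval used in this test
updates_per_second = 1000 // update_interval_ms   # 50 checks per chart per second
reads_per_second = charts * updates_per_second
print(reads_per_second)     # 2500
```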
Date Time Of Last Edit: 2016-04-02 03:38:52
[2016-04-02 03:44:59]
i960 - Posts: 360
Okay, well pull up procmon from Sysinternals, filter on SierraChart.exe, and verify whether it's actually doing something continuous. The Task Manager figure just shows aggregate usage, whereas procmon shows whether it's actually issuing syscalls. Additionally, large intraday files might also make a difference (although since an offset is used, I wouldn't expect it to be too significant). Try opening 10 intraday charts from different markets + procmon.
[2016-04-05 05:18:32]
Sierra Chart Engineering - Posts: 104368
We are looking at this again.

This is not true:
1. Needlessly tries to create files for already existing intraday/daily data files

Not sure how you conclude this. No need to respond because we know there is not a problem like this.

2. Needlessly reads 1 record of intraday data from the end of all open intraday/daily data files on a frequent and continual rate across every single chart open across all chartbooks.
This is only the case for Intraday files, and it is a very efficient operation. We are certain that no users, other than maybe yourself, would notice any CPU usage from this.

This isn't just 1% CPU either; with a large amount of chartbooks/charts open one can see 8-10%+ CPU basically doing nothing useful whatsoever.
We are certain no one is going to have this from the checking of Intraday data files.

one can even *close* their chartbooks entirely and *still* see SC accessing and updating the data files.
This is only the case if trading is occurring for a particular symbol. Otherwise this is totally false.

why not simply just check the filesize and if it hasn't varied then don't bother reading 40 bytes of data that hasn't even changed.
This does not work with Intraday Data Storage Time Units greater than 1 Tick.
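The reason a size check fails there can be spelled out: with bar-based storage, the current bar's final record is rewritten in place as new ticks arrive, so the file size stays constant while its contents change. A hypothetical Python sketch (the 40-byte fixed record size is an assumption carried over from this thread):

```python
import os

def update_last_record(path, record):
    """Overwrite the file's final record in place, the way a bar-based
    data file is updated while the current bar is still forming."""
    with open(path, "r+b") as f:
        f.seek(-len(record), os.SEEK_END)  # position at the start of the last record
        f.write(record)                    # same size afterwards, new contents

# A size-only change check cannot detect this update:
# os.stat(path).st_size is identical before and after update_last_record().
```

So a reader relying only on file size would miss every intra-bar update; hence the last record has to actually be read.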
[2016-04-23 15:18:39]
Infinite - Posts: 134
I have done similar tests and find SC to be the least CPU-intensive platform. Certain studies in the past would jack the CPU usage up, as they would in other platforms.
[2016-04-25 17:53:14]
Sierra Chart Engineering - Posts: 104368
Of course. What is posted in this thread is not even true with regard to CPU usage.

We did not look at the video, but we later looked at the screenshot, and we understand why they think files are being created. When a file needs to be opened, the CreateFile function has to be called. But this does not mean a file is being created.

And once the file is opened for a chart, it remains open.
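For readers unfamiliar with the Win32 API: CreateFile is the general-purpose open call, and whether it creates anything depends on its creation-disposition argument (OPEN_EXISTING never creates a file, so procmon showing "CreateFile" does not mean a file was created). The same distinction exists in POSIX-style flags; an illustrative sketch, not Sierra Chart's code:

```python
import os

def open_existing(path):
    """Open an existing file; fails rather than creating one.
    Analogous to Win32 CreateFile with OPEN_EXISTING."""
    return os.open(path, os.O_RDONLY)  # no O_CREAT: never creates the file

def create_new(path):
    """Create a brand-new file; fails if it already exists.
    Analogous to Win32 CreateFile with CREATE_NEW."""
    return os.open(path, os.O_WRONLY | os.O_CREAT | os.O_EXCL)
```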
Date Time Of Last Edit: 2016-04-25 17:54:58
[2016-04-25 19:09:47]
User472677 - Posts: 362
CPU and memory usage are big concerns of mine and I monitor them carefully.
I can make a definitive statement that I do not see what Infinite is seeing.
FURTHER SIERRA IS MORE EFFICIENT THAN ALL COMPETING PROGRAMS THAT I HAVE TRIED.

What is your experience in terms of running 100 charts in one instance compared to running 25 charts each in four instances?
[2016-04-25 19:48:30]
i960 - Posts: 360
We did not look at the video but we later looked at the screenshot and we understand why they think files are being created. When a file needs to be opened the CreateFile function has to be called. But this does not mean a file is being created.

Which is specifically why I said this:

1. Needlessly tries to create files for already existing intraday/daily data files (which of course error because they already exist), but this may be a non-issue in itself and just part of the access function.

You're also basically saying "oh, this whole thing is not a problem and it doesn't contribute to CPU" even though a video has been made of SC doing exactly that. I'm not saying it's something that monopolizes cores; I'm saying it's something unnecessary, driven by a hackish need to read intraday data files from every single open chart "just in case" there's more than one instance using them. Make it a configurable per-chart option that is disabled by default, because this is vastly not the normal use case.

You realize some of your users are also professional developers too, right?
Date Time Of Last Edit: 2016-04-25 19:49:43
[2016-04-25 21:56:47]
Sierra Chart Engineering - Posts: 104368
1. This is totally untrue and we explained why in our prior post.

You are not proving anything here. Simply monitoring this in the way that you are is itself what is causing the CPU usage. No one is going to notice any CPU usage from the files being checked for new data.

Each chart maintains its own copy of the data, and the charts are updated independently, so each one has to check the files independently.

This is a non-issue.
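For comparison, the consolidation the user is asking about — one check per data file per polling cycle, fanned out to every chart using that file, rather than one check per chart — could be sketched like this. This is a hypothetical design, not how Sierra Chart is implemented, and it assumes growth-only change detection:

```python
import os
from collections import defaultdict

class SharedFileWatcher:
    """Poll each data file once per cycle and notify every subscribed
    chart, instead of having each chart poll the file itself."""

    def __init__(self):
        self._subscribers = defaultdict(list)  # path -> [callback, ...]
        self._sizes = {}                       # path -> last size seen

    def subscribe(self, path, callback):
        """Register a chart's update callback for a data file."""
        self._subscribers[path].append(callback)

    def poll(self):
        """One stat per file per cycle, regardless of subscriber count."""
        for path, callbacks in self._subscribers.items():
            size = os.stat(path).st_size
            if size != self._sizes.get(path):
                self._sizes[path] = size
                for cb in callbacks:           # fan out to every chart on this file
                    cb(path)
```

With N charts on the same symbol, this does one stat where per-chart polling would do N; the trade-off is that charts no longer control their own update timing, which may be why a per-chart design was chosen.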
Date Time Of Last Edit: 2016-04-25 22:00:53
[2016-04-25 22:02:47]
Sierra Chart Engineering - Posts: 104368

What is your experience in terms of running 100 charts in one instance compared to running 25 charts each in four instances?
You are the one to make the best judgment. Neither we nor anyone else can make that judgment for you.
[2016-04-25 22:09:55]
Sierra Chart Engineering - Posts: 104368
We just tested with 101 charts open at a 20 millisecond update interval and noticed zero percent CPU usage.
Date Time Of Last Edit: 2016-04-25 22:13:16
[2016-04-25 22:30:02]
i960 - Posts: 360
1. This is totally untrue and we explained why in our prior post.

Did you not read the part that said "but this may be a non-issue and just part of the access function"? It's possible for a statement to be made and for a counter-balancing statement to be attached to the same sentence. Example: "it looks like this, but there's a chance it could actually be that."

Simply monitoring this in the way that you are in itself is what is causing the CPU usage

That is a ridiculous statement. Monitoring/tracing a process has zero relation to its CPU usage. Anyway, I'm done with this debate because it's going nowhere.
[2016-04-26 02:59:52]
Sierra Chart Engineering - Posts: 104368

Did you not read the part that said "but this may be a non-issue and just part of the access function" ?? It's possible for a statement to be made and then a counter-balancing statement to also be attached to that sentence. Example: "it looks like this, but there's a chance it could actually be this."
We did not fully understand what you meant by this. We understand now.

Monitoring/tracing of a process has zero relation to the CPU usage of it.
Obviously in this case it does. And certainly the process of debugging/monitoring does introduce additional processing. We have plenty of experience with that.

It is pointless to go through an elaborate effort to avoid periodically checking files that are currently open when there is no issue here. If we hear from other users about a problem related to this, and it is definitively confirmed, then we will look at it. But look at our prior post describing the extreme test we performed with no impact.

And we do appreciate the interest in trying to make Sierra Chart better, but we do not think this has been approached properly.

This thread is now locked. It is unnecessarily taking our time.
Date Time Of Last Edit: 2016-04-26 04:50:35
