
Support Board

Date/Time: Thu, 07 Dec 2023 03:13:46 +0000

10x faster than 4 Instances !!!

View Count: 3706

[2022-10-04 16:36:03]
Dorian - Posts: 57
We can run Sierra Chart software 3x for free, and up to 5x by paying a few dollars.

We tested the difference between receiving CME Group data via Teton and Denali in Instances versus in separate installations, and the difference in speed is just crazy: the separate installations are in real time from the 8:30 CT open, unlike the Instances, which lag. With Instances we are not in real time and cannot trade for a few minutes while the chart time synchronizes with local time; generally you have to wait until 8:35 CT.

I invite you to test for yourself. So we no longer use the Instances; instead, we run the software several times on the same computer.

With an Intel Core i7-11700K, which is still an 8-core processor, and a very simple chipset like the B560, with a ton of Studies and 6 to 7 charts per installation, Sierra Chart held the load without any problem. The Nasdaq was a rocket ship, just like the S&P 500.

It's so crazy that I share it here.
[2022-10-04 21:28:27]
John - SC Support - Posts: 27546
Just be aware of the limitations related to the Denali data for separate installations. Refer to the following:
Denali Exchange Data Feed: Connections
For the most reliable, advanced, and zero cost futures order routing, use the Teton service:
Sierra Chart Teton Futures Order Routing
[2022-10-05 00:09:46]
User90125 - Posts: 715
@Dorian, so basically don't use the instances but open another copy of SC instead on the same computer.

This presumes that you are using a fast SSD with enough space for all the data that will be required for each copy, correct?
[2022-10-05 12:07:02]
Dorian - Posts: 57
Here is a video of the problem. It happens at every news release and every 8:30 CT open. https://vimeo.com/757148042/84c2f92698 (Paris time, UTC+2)

Indeed, we can only launch Sierra Chart 3x; after that it blocks, even after subscribing to 4/5 instead of 3/5.

SC Data - All Services | Logon error received from server: Connection limit exceeded. | MaxConnectionsForSameDevice = 3, NumCurrentConnectionsForSameDevice = 4 | 2022-10-05 12:10:09.293

Not all that obvious.

We notice that the Instances are synchronized with each other; even the DOM has problems, even though I only have the DOM and an empty chart to manage the position in that Chartbook and in the main Sierra Chart. I don't understand why the Instances are synchronized with each other, and why they even block the DOM in the main installation.

I only use Studies native to the software.
Date Time Of Last Edit: 2022-10-05 12:25:22
image2022-10-05_14-22-59.png - Attached On 2022-10-05 12:23:35 UTC - Size: 14.07 KB
image2022-10-05_14-25-04.png - Attached On 2022-10-05 12:25:18 UTC - Size: 20.02 KB
[2022-10-05 12:32:37]
Dorian - Posts: 57
@User90125 That's correct.

Whether you use Instances or multiple installations of the main Sierra Chart, the storage size will be identical, because you are simply replacing the Chartbooks of the Instances with a single Chartbook in each main Sierra Chart installation.

Remember that in each Instance folder there will be a Data folder with the .scid data. That's why the size will be the same as well.

We use a method to make all this fit on a 100 GB Intel Optane SSD P4801X, one of the fastest, lowest-latency SSDs in the world. On the motherboard, it must also be connected to the M.2 port managed by the CPU, not by the chipset; that is even faster.

For the first installation, we display only one symbol, the one we are trading. So there will only be one .scid file.

For the second installation, we display only the S&P 500. It will only download S&P 500 data in .scid.

For the third installation, we display only the Nasdaq. It will only download Nasdaq data in .scid

We trade the Nasdaq, but we watch the S&P 500, that's why these 2 symbols have to be real time and fast for us.

For the fourth installation, we use it for intermarket relationships. So the Data folder of this installation will be heavy, because we display several symbols. Even if it struggled, that would be okay, because we are not watching the charts on this one all the time. But it does not even lag.

This is quite smart, as the CPU and SSD will have less work to do.

Over 3 months of use, this will use 50-70 GB.

During new contracts (rollover), we delete all the old contracts. In fact, we completely delete all the .scid files from all the Data folders to start from scratch with just the new contracts. It's cleaner. The new contracts take up only a few MB of storage space because they are recent, so downloading the data will not take long. There is a setting that allows this, because if the software downloaded the old contracts again, it wouldn't work: in the Data/Trade Service Settings, put 1 instead of 180 days.

Be careful to also delete the data from the MarketDepthData folder if you use the Heatmap.
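The rollover cleanup described above can be scripted. The Python sketch below walks each installation's Data folder, removes the .scid files, and empties MarketDepthData; the installation paths are hypothetical placeholders, not the poster's actual folders.

```python
from pathlib import Path

# Hypothetical installation roots; adjust to your own copies of Sierra Chart.
INSTALLS = [Path(r"C:\SierraChart"),
            Path(r"C:\SierraChart2"),
            Path(r"C:\SierraChart3")]

def rollover_cleanup(install: Path) -> int:
    """Delete every .scid file in Data, plus all files in
    MarketDepthData, so only the new contracts get re-downloaded."""
    removed = 0
    for scid in (install / "Data").glob("*.scid"):
        scid.unlink()          # old contract's intraday data
        removed += 1
    depth = install / "MarketDepthData"
    if depth.is_dir():
        for f in depth.iterdir():
            if f.is_file():
                f.unlink()     # heatmap/market-depth data for old contracts
                removed += 1
    return removed

if __name__ == "__main__":
    for install in INSTALLS:
        if install.is_dir():
            print(install, "->", rollover_cleanup(install), "files removed")
```

Run it once after rollover, with Sierra Chart closed; with the days-to-download setting at 1, each installation then rebuilds only a few MB of fresh data.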
Date Time Of Last Edit: 2022-10-05 12:36:58
[2022-10-05 13:50:04]
Dorian - Posts: 57
The difference is crazy. I'm happy to use Sierra Chart when I see the speed and beauty of it all.

No comparison with the Instances. You can launch Sierra Chart 3x. Look how crazy this is.


You don't even need an 18-core Intel Xeon, lol.
[2022-11-11 03:13:58]
User379468 - Posts: 508
I thought the entire idea of multiple instances is to get that multicore performance you're seeing, while making the file/data/settings management slightly easier.

Why are separate installations working better for you?
[2022-11-13 20:40:57]
Rui S - Posts: 161

Thank you for sharing this information.

I am a bit confused though. What you are saying is that it is better to use several separate single installations of SC instead of sub-instances, i.e. instances opened within the main installation?

Is my interpretation correct?
[2022-11-13 23:40:02]
User823591 - Posts: 14
Rui S,

Just wanted to chime in regarding separate installations of Sierra Chart.
I have been doing this for many years .... mostly to keep things organized on each monitor screen.
I saw your postings in the Denali Data Burst thread .... Denali data burst

Just wanted to let you know that I see the same lags that you mention in that thread.
Go and check my thread where I inquired about the DOM speed .... Trade DOM update interval

Before I continue: with the recent upgrades that Sierra has done to the Denali Feed servers, it's much improved.
Like yourself, I've done everything documented in the Remote Buffer settings .... However, it still lacks fluidity when I compare it side by side with my IQFeed instances.

What I mean by fluidity is that you can see every TICK movement as Price moves ... this is very noticeable during the US open, closing minutes & of course news events.

For reference I have a total of (4) Sierra Chart installations .... (2) for IQFeed & (2) for the Denali Feed.
One of the Denali installations is only for the DOM & the other installation is to handle anything else.
[2022-11-14 12:00:43]
Rui S - Posts: 161

Thank you for your clarification and information.

I had already seen your other thread, but thank you for the link anyway.

My SC setup is quite similar to yours, in order to spread the data load across the different instances.

I completely understand and agree with what you are saying about the lack of "fluidity" as I have and feel the exact same problem.

Unlike you, I tried sacrificing the DOM's performance by setting the update interval to 100 ms, but the problem still persists.

Furthermore, as I don't really need market depth, I simply don't use it. That alone should contribute a lot to minimize the work load.

On the other hand, I don't have a second data feed as good as yours (IQFeed). I simply have the "horrific" IB data feed. In reality, however, at those critical moments (market open, etc.) the DOM fluidity is much better with IB's data feed, and it never freezes...!!

I feel a bit trapped now, because I moved my day trading account to Stage 5 / Teton / Denali and I would really want to stay this way!

As a scalper, I mostly trade the first hour after the open, exactly the time when these issues happen. I must confess this has been a major problem for me.

But as we know, SC Engineering is working hard to find the best solution.

Thus, I am very hopeful that soon Denali will be working at a 100% performance, will regain our trust and we won't need to have and pay for secondary data feeds.
Date Time Of Last Edit: 2022-11-14 12:03:04
[2022-11-14 18:58:17]
Dorian - Posts: 57
Imagine I installed Sierra Chart, then copied this folder 2x to have 3. Then I run all 3 at the same time with Teton/Denali routing. So I have 3 open Chartbooks, 1 Chartbook per Sierra Chart installation.

The problem is that the Instances don't even use 10% of a CPU core. I put each Instance on a different processor core; it changes nothing, and they continue to be slow, with lags. Meanwhile, the main installation takes 60-80% of a core for the same Chartbook. Instances running the same Chartbook are stuck at 10% and lagging. It's weird, but true, despite an Intel Optane SSD connected directly to the CPU and not the chipset. So despite the best computer components and the correct connections, the Instances continue to lag. I also have a Livebox 6 router on 2 Gbps fiber. The motherboard only handles 1 Gbps, but suffice it to say that on the computer and internet side I am well equipped. The internet connection serves only this one computer.

So the main installation of Sierra Chart fully utilizes a CPU core, unlike the Instances, so it's more efficient. But it really needs a processor with the highest base core frequency. Never look at the Turbo frequency; it is not used. For example, the base frequency of an i7-11700K is 3.6 GHz; its Turbo frequency is 4.6 GHz and will not be used. The i7-11700K (3.6 GHz base) is therefore more efficient than an i9-11900K (3.5 GHz base). A processor with 6 or 8 cores is sufficient, since the software is not multi-core. I even saw it switch between cores for the main installation, so it really only uses one core.

Indeed, we can assign different cores to each Instance, but it is as if they are not used. The Instances keep lagging while the CPU core is used at only a few percent.

For each main Sierra Chart launch, I place it on different cores. I explained how to do this in the video below. It's not that complicated, but as I'm French, the video is not in English.

Vidéo (3h19) https://trading-order-flow.fr/optimisation-sierra-chart/

A 6-core processor is therefore sufficient. The most important thing is to favor the frequency of the processor, and not the number of cores. An 8-core at 3.6 GHz is better than an 18-core at 3.0 GHz. I showed all the tests in this video.

In the video, I explain the trick of creating a shortcut that places the software, for example, on cores 6 and 7 as soon as it is launched. Another launch goes on cores 4 and 5, and another on cores 2 and 3. It saves time. I don't use cores 0 and 1, because Windows generally uses those cores.
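On Windows, this shortcut trick is typically done with the `start /affinity` switch, which takes a hexadecimal bitmask in which bit N allows core N. As a small sketch (the Sierra Chart path in the comment is a placeholder), the mask for each core pair can be computed like this:

```python
def affinity_mask(cores):
    """Return the hex CPU-affinity bitmask for `start /affinity`:
    setting bit N allows the process to run on core N."""
    mask = 0
    for core in cores:
        mask |= 1 << core
    return format(mask, "X")

# One shortcut target per installation, e.g. (path is a placeholder):
#   cmd /c start "" /affinity C0 "C:\SierraChart\SierraChart_64.exe"
for pair in ([2, 3], [4, 5], [6, 7]):
    print("cores", pair, "-> /affinity", affinity_mask(pair))
```

So cores 2-3 give mask C, cores 4-5 give 30, and cores 6-7 give C0; paste the resulting mask into each shortcut's target line.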

Be careful to place Sierra Chart on P-cores, not E-cores, on 12th and 13th generation processors. I recommend disabling these efficiency cores in the BIOS. I also disable Hyper-Threading in the BIOS. I also recommend an Intel Xeon processor, to avoid occasional computer crashes.

If you are lost on the choice of a processor, the quotes are here https://trading-order-flow.fr/workstation

Another thing, I noticed that by removing the DOM, the Nasdaq no longer had a lag when the markets opened. On the other hand the ES has no lag even with an open DOM, because it is slower.

In any case, a processor with 6 or 8 cores is sufficient with all the tricks explained here, but try to have a base frequency higher than 3 GHz. Never look at the Turbo frequency. It will never be used. This is used for renders or video exports when cores are fired at 100% for several seconds.

Also take the processor with the highest TDP. For entry-level Intel processors, the highest TDP is 125 W; for high-end Intel processors, it is 165 W. The advantage of high-end processors is that they handle more PCIe lanes, so you can install multiple SSDs at 4 GB/s on PCIe 3.0, in both U.2 and M.2 formats. For entry-level processors, on the other hand, there is only the M.2 format, generally with only 2 slots on the motherboard, and only one of them will be managed by the CPU; the other is managed by the chipset.

Now, Intel hasn't updated its high-end processors since 2020. I recommend waiting until 2023 to build a computer with a high-end processor, so you can keep it for 10 years. The new high-end processors should normally be released in spring 2023. There is no need for a high-end processor with a lot of cores, because we have seen that it is useless. But a high-end CPU does have the advantages of more PCIe lanes and a higher TDP.

To respect the TDP, the more cores there are on the processor, the more the base frequency of each core is reduced. So take a high TDP with the highest base frequency, and at least 8 cores. As I use 3 main installations, I use 3 cores, but I assign 2 cores per installation for safety, so 6 cores is enough. You also have to account for Windows and other software, so an 8-core is perfect for me; it also lets me make video recordings, using cores 0 and 1 for the screen-capture software.

You can also use the Process Lasso software to manage cores; I explained it in the video. But if you create Sierra Chart shortcuts that automatically place each launch on cores 2 and 3, 4 and 5, and 6 and 7, there is no need for Process Lasso. This avoids getting lost in yet another piece of software.

If you are running 4 Sierra Chart installations on an 8-core processor, put only 1 core per installation and keep the other 4 for your other software. Or, in 2023, take a 10- or 12-core processor to put 2 cores per Sierra Chart installation. Wait for Intel to release its new high-end processors.

CAUTION: For processors with a TDP of 125 W or more, you must install a Noctua cooler. This will keep the temperature down to around 80 degrees. If you don't use a good cooler, the processor will throttle its frequency when it reaches 100 degrees.
This cooler is more than enough; the processor never exceeds 80 degrees: Noctua NH-U9S
For perfectionists who want the best cooler in the world: Noctua NH-U12A

If you do not look at the DOM liquidity, you can simply subscribe to top-of-book data: you pay $1.25 instead of $11. I explained it at the end of the video. This can make Sierra Chart even faster. See the attached image.
image2022-11-14_19-34-29.png - Attached On 2022-11-14 18:39:01 UTC - Size: 57.96 KB
[2022-11-14 20:35:00]
Rui S - Posts: 161

Thank you very much for the detailed explanation. It's very useful information.

Some things I already knew but there are other things that I didn't know and I will surely give it a try.

I'm going to watch your video for sure too. I hope my French is good enough to understand at least most of it.

Thanks again.
[2023-06-19 07:29:38]
User880238 - Posts: 3
Hey fellas, I'm new to SC and just found this thread. I was wondering if they ever straightened this out before I commit to a six-month license. It would be a shame if it went unresolved for so long, knowing the development team's impressive track record.
[2023-09-30 13:12:14]
GravisHTG - Posts: 303
I am running on an AMD 5XXX series here and it's running perfectly.

The graphics card is a 3060, and I always run sub-instances. The goal is to separate your heavy-lifting studies into their own instance; that way you will not feel lag, unless the study itself lags its instance.
[2023-10-23 05:11:38]
wcc118 - Posts: 3
This is an incredibly helpful and thoughtful thread that helped me solve my lag challenges, but not in the way I expected. I discovered that my CPU cooling was insufficient, throttling the available processing power from 5.2 GHz down to 800 MHz under load spikes, which occur at exactly the wrong times during RTH. What happens is that the processor core reaches 100 C, resulting in laggy behavior. Upgrading the fan (a $25 Amazon purchase) immediately solved the lag. To observe this behavior on your rig, use hardware monitoring software to watch your CPU temperature and available processing power as you navigate through different activities on your PC. Your mileage may vary, but I'd never have thought my previous fan was the source of my lag during peak data demands, with only Sierra Chart open and all other available documentation observed and implemented.
[2023-10-23 10:20:32]
Calculus - Posts: 75

If you're suffering from very high CPU temps that $25 Amazon special might not be a good idea for the long term.

As you say it's solved the problem but again, it might not be good enough. If you're using a standard PC case then consider the Noctua NH D15 as it's the Gold standard of CPU fan coolers. But it's very big so make sure your case is large enough. BeQuiet coolers are also very good. Hope this helps.
[2023-10-23 16:50:31]
wcc118 - Posts: 3
Calculus - absolutely. My solution for now is more than acceptable, and now that I have isolated my challenge, I can optimize further if it proves insufficient. My intent was to point out another variable for folks to look at beyond the existing documentation and the common refrains around hardware. Like many of the dedicated, I have spent countless hours trying to get my system to an optimum state, and I am relieved to be there, or at least very close. Having a position on when your system lets you down is disheartening.
[2023-10-23 18:22:25]
Dorian - Posts: 57
If you have an Intel Core K, or Intel Xeon X processor, you can also push the base frequency of all cores to Turbo frequency with Intel's XTU software.

For Sierra Chart to work correctly with complex Studies, the processor must have a score greater than 3500 in Single Thread.
You can also limit the liquidity level of the DOM. For the ES, I limited it to 15, for the NQ, I limited it to 1.

As a motherboard, you need a Z series chipset.

The Noctua NH-D15 cooler is great (165 mm), but if your case is not very large, you can take a Noctua NH-U12A (158mm).

You can test the compatibility of the coolers on your motherboard here https://ncc.noctua.at/motherboards

You should always disable hyper-threading and E-cores.
Date Time Of Last Edit: 2023-10-23 18:43:40
[2023-10-24 20:23:01]
User61168 - Posts: 220
My 2 cents... I have never seen any "value" in creating a new instance of SC (via File >> New Instance), or in worrying about performance issues, data file sizes, etc.

I keep it simple by having 3 main installations of SC, without using study collections:
1) Live trading only, with only ONE day of data. Only fully tested Chartbooks move to this install. Zero SIM accounts, to keep it clean from accumulating logs, etc.
2) An exact clone of #1, used only for forward testing.
3) Development and market replays only, and for applying SC fast updates, etc.

Migration path is 3 > 2 > 1. I might merge 2 and 3 into just one instance and simplify things further :-)
