
Support Board





[User Discussion] - Using Visual Studio C++ to create Custom Studies


[2021-12-15 17:38:08]
User99735 - Posts: 234
Sometimes the issue with DLLs not being recognized is that the "VC++ Redistributable" is not installed on the system. Check out https://docs.microsoft.com/en-us/cpp/windows/latest-supported-vc-redist?view=msvc-170
[2021-12-15 23:00:17]
WarEagle - Posts: 67
I had him download and install the redistributables but that did not change anything (great idea though, makes sense that it could be the cause).

However, I decided to also try building the DLL in Release instead of Debug. The errors I was getting turned out to be caused by the Release configuration not matching the Debug configuration. Once I realized this I followed your template and made sure everything matched. At that point the Release build compiled and, as you say, it is MUCH faster!

I sent this updated version of the dll to my friend and it worked! So there must have been an issue in having him use the Debug version, which worked fine on my computer where it was built. Anyway, thank you, thank you, thank you!

I appreciate your help!
[2021-12-19 12:47:35]
Bet_More_Tim - Posts: 21
Hello, I wanted to ask some questions for clarity based on what User310645 posted about their process.


I've attached a picture here where I've written notes next to their code, as well as how I assume it's to be used in a final ACSIL .cpp file.

So I think most of it is clear; however, on point 2 I'm not entirely sure what exactly is being done, or how that line relates to its potential usage in an ACSIL file. Is the "Study(&sc) {}" what goes in the ACSIL .cpp file after SCSFExport? What confuses me further is that it seems another pointer is being passed into the SetReferences() function, at which point it seems like there are two pointer references going on?

Another thing I wonder is whether the subgraphs can be set from the study class file the same way the user inputs are set, and then a value assigned to a subgraph as needed by referencing the variable we named in the study class, or would it be more proper to set the subgraph values from within the ACSIL .cpp file, nice and cleanly as well?

As far as doing calculations with data, I'm assuming the idea is to pass the sc array of the data you want as an argument to one of your study class functions... and this ties into the previous question as well: would it be more proper to set subgraphs from within the class function itself, or to return a value from the class function and just set the subgraph value to that function call?

I'm assuming we would not want to bring actual data out of Sierra to "keep" longer than needed, which is my reason for preferring to pass the sc array as an argument rather than having separate variables set to hold those values... but then I have the question: does C++ automatically clear any memory that was used during the calculation of a function, or is there still a need to clear memory in some way?

I guess then the idea is to compile your class file as a .dll and drop it in the ACS_Source folder, and afterwards compile your ACSIL .cpp file... or, since the idea is working in Visual Studio building solutions, with our additional include directories and destination folder set, we just "build solution" all at once, and both .dlls (the study class file and the ACSIL implementation of the study) will get built and dropped where we want?

Thanks
Image: Making_Sense_of_cpp.jpg - Attached On 2021-12-19 11:50:23 UTC - Size: 781.89 KB
[2021-12-20 09:32:31]
User310645 - Posts: 49
In general terms, all this is doing is wrapping the lifecycle of calling a study in a C++ wrapper. The main entry point into the study is a normal SCSFExport function. I've attached the SCEntry.cpp below. (I didn't include this originally as I was replying to the discussion specifically around creating UI components and didn't want to add extra complexity.)

This calls into a static method on the Study base class that is responsible for creating the required component, lifecycle management (e.g. SetDefaults), initialisation, running the calculation part of the study (DoStudy()), storing/retrieving the pointer to the component (you need to persist the UI component somewhere between calls) and cleaning up on sc.LastCallToFunction.

For example:


void Study::Run() {

  if (!Initialised) {
    Init();
  }

  if (_sc->SetDefaults) {
    DoSetDefaults();
    return;
  }

  if (_sc->LastCallToFunction) {
    CleanUp();
    return;
  }

  DoStudy();
}


Study* Study::StartStudy(StudyType StudyType_, SCStudyInterfaceRef sc_) {

  const bool IsDLLInit = sc_.SetDefaults && sc_.ArraySize == 0;

  Study* study = IsDLLInit ? NULL : static_cast<Study*>(sc_.GetPersistentPointer(1));

  if (study == NULL) {
    switch (StudyType_) {
    case STUDY_COMBO:
      study = new ComboStudy(sc_);
      break;
    case STUDY_SLIDER:
      study = new SliderStudy(sc_);
      break;
    case STUDY_THEME:
      study = new ThemeStudy(sc_);
      break;
    case STUDY_POWERMETER:
      study = new PowerMeterStudy(sc_);
      break;
    }
    sc_.SetPersistentPointer(1, study);
  }

  study->SetReferences(&sc_);

  study->Run();

  if (sc_.LastCallToFunction || IsDLLInit) {
    delete study;
    study = NULL;
    sc_.SetPersistentPointer(1, study);
  }
  return study;
}
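
The attached SCEntry.cpp is not reproduced in the thread. Assuming it simply forwards into the StartStudy() factory above, a minimal sketch of one entry point could look like this (the study, DLL and header names are placeholders, not the actual attachment):

#include "sierrachart.h"
#include "Study.h"  // assumed header for the wrapper classes

SCDLLName("Custom Wrapper Studies")  // placeholder DLL name

// Hypothetical entry point: all lifecycle handling (SetDefaults, init,
// per-update calculation, cleanup) happens inside the wrapper.
SCSFExport scsf_SliderStudy(SCStudyInterfaceRef sc) {
  Study::StartStudy(STUDY_SLIDER, sc);
}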


All the files are compiled together into a single DLL (using Visual Studio) and placed in the data folder just like any other study would be. You could also create libraries of course and link them as required.

Your specific points:

For 2: The initial SCStudyInterfaceRef is passed in the constructor for initialisation. Unfortunately you cannot just store it for the duration of the study, as you get a new reference each time Sierra calls into it. Therefore the SetReferences() call is made to update the reference on each call and set any variables that may have changed.
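
SetReferences() itself isn't shown above; purely as an illustration, it might be as simple as the following, assuming SCStudyInterfaceRef is a reference typedef over the underlying s_sc struct (so &sc_ yields an s_sc*):

// Hypothetical: refresh the cached interface pointer on every call, because
// Sierra passes a fresh SCStudyInterfaceRef each time it calls in.
void Study::SetReferences(s_sc* sc_) {
  _sc = sc_;  // _sc is the pointer used by Run(), DoStudy(), etc.
}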

You can perform any operation on subgraphs in exactly the same way. You would just do it in the DoStudy() part.
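
For example, a hedged sketch of a DoStudy() override that writes to a subgraph (the SliderStudy name and subgraph index are assumptions):

// Hypothetical: write a value into subgraph 0 at the current index.
void SliderStudy::DoStudy() {
  SCStudyInterfaceRef sc = *_sc;  // convenience reference to the interface
  sc.Subgraph[0][sc.Index] = sc.Close[sc.Index];
}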

All the data from the arrays is always in the SCStudyInterfaceRef object. You are accessing it via a reference (or a pointer). If you choose to copy anything then of course it is your responsibility to clean it up.

You could write a book on C++ memory management, but in general terms anything you create on the heap, i.e. with "new" (but read about smart pointers), you will have to clean up yourself. Objects used in a function that are created on the stack are cleaned up when the function exits.
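
A tiny illustration of that distinction (plain C++, nothing Sierra-specific):

#include <memory>

void Example() {
  int* raw = new int(42);                  // heap: you must delete it yourself
  delete raw;

  auto smart = std::make_unique<int>(42);  // heap, but freed automatically

  int local = 42;                          // stack: released when the function exits
}                                          // 'smart' and 'local' are cleaned up here
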
Date Time Of Last Edit: 2021-12-20 09:36:25
Attachment: SCEntry.cpp - Attached On 2021-12-20 09:09:04 UTC - Size: 977 B
[2021-12-21 07:42:21]
Bet_More_Tim - Posts: 21
Dude thank you so much! This helps a lot. This is really freakin awesome.

Just to make sure I got this down pat... I posted a picture

I think I get it now for the most part.

Is that first code block part of the study.h file defining the function, or part of the implementation?
As in: have you fully defined the Study::Run() function in the header file and only provided a prototype for Study::StartStudy(StudyType StudyType_, SCStudyInterfaceRef sc_) in the header, then used Study.lib to implement both (and only #include the study class header files in Study.lib), so that in the ACSIL entry point we only need to #include "study.h", which brings in everything it needs?


Another question is about the two 'Study*' references... do those simply exist as class pointers representing the current study that's being controlled, i.e. the study type returned by the Study::StartStudy() function, such that on the next line, where we're checking IsDLLInit, we use 'Study* study'?
...I might have figured it out as I'm typing. This is the persistent instance that exists so things don't keep getting created, and we are either casting the persistent pointer it was previously assigned back to a Study* class pointer (if the chart is not empty and has data), or we're leaving it null until the next step, where we then create a new persistent pointer. So on the first run-through we set a newly loaded study equal to a persistent pointer, at which point SetReferences() will get run, along with Run(), which will only initialize things before going back to the top of the StartStudy() function. On the next call the bool will be false, we won't need to create a new instance of the study class object, and we simply cast the pointer back to what it was and get on with life.

I guess what I'm not entirely sure of is this: regardless of which study entryway was triggered, it's running through the StartStudy() function and getting assigned the same persistent pointer, just to be run one time, and then every single time the function gets called it's re-creating that persistent pointer? This question stems from my not knowing exactly how Sierra Chart calls the "SCSFExport scsf_name(SCStudyInterfaceRef sc_)" function, because if it's being called "non-stop" in order to calculate a study, that's a lot... however, if upon adding it to a chart it runs that function once, meaning "now it exists and is running", that would coincide with basically "living" in our Study::Run() most of the time, and then when the chart is closed or the study is deleted, the memory is deleted.
And along with that, since we can have multiple studies in a DLL, will every addition of a study from that DLL to a chart create its own instance controlled by the study controller, or are all the studies in the same DLL that are attached to a chart somewhere "fighting for"/"waiting in line" to get used by the controller? I'd imagine these are separate instances, since it's entirely conceivable to have the same several studies on several different charts. I suppose this question stems from my lack of knowledge of how Sierra Chart deals with threading, and whether that would lead to 'bumping elbows' with shared controllers, as opposed to the resources being separately assigned per instance. (Not going to lie, I don't even know if this line of questioning makes sense, but you've got to sound stupid to learn, I guess, lol.)


Another thing is these two lines, 'study->SetReferences(&sc_);' and 'study->Run();'... I get study->SetReferences(&sc_); in that the class 'study' is pointing to is the class where SetReferences() lives, but what I'm not sure about is also having 'study->Run();'... is this because there exists an 'object' or 'class' reference inside the Study::Run() function, and we want the class object we're pointing to to be the one "used" during the call?


Edit: I might have figured this last bit out actually... it's because our study class constructor passes its responsibilities off to the Study() base class constructor, and that's why we need to call Study::Run() through the class pointer we assigned to the proper study class, such as:
DialogForm::DialogForm(SCStudyInterfaceRef sc) : Study(&sc) {}
where an object created by DialogForm() has its responsibilities delegated to Study(), and we're giving Study the reference/pointer to use for the SCStudyInterfaceRef.



Is the study controller class lib going to be where we #include all of our study classes, since the controller is interacting with the models... and then in the ACSIL entryway we only need to #include our header file for the controller class, along with the #pragma comment(lib, "comctl32.lib") line, which will automatically create the links needed to the static libraries used?


Then from there, the big difference between static and dynamic library linking is going to be:
-static linking brings the library into the DLL being compiled, so it's all there (as in our ACSIL DLL in the SC/Data folder)
-dynamic linking means we also need to place all linked .dlls in the SC/Data folder, but other than that (and some preprocessor directives for exporting functions) we'd be able to implement dynamic libs the same as static libs?



Besides this being a way cleaner method of developing in Sierra, the main reason I need to learn to implement things this way is so I can have the functionality of TensorFlow C++ and load trained models into a study.

I've also been thinking through what else is actually possible, since it would seem that "everything" is. To that end I've been thinking through an "on the fly" implementation of a random forest decision tree, or Ada/XGBoost: basically load up a chart, have classes that build features, splice the sc.Index range into train/validation sets, train it real quick (since these models can usually train relatively quickly), and then auto-reload the chart with the trained model now making predictions in real time (or rather, on bar close, so the "full" feature data is being sent through the model). I'd even be able to create a condition such that if the model gets too 'off' by some metric for long enough, it auto-splices the sc.Index range again, retrains/validates, and deploys the newly updated model.

Similarly, I'm also wanting to implement TensorFlow STS models this way, as I'm assuming it should be just as possible as everything else: take a model structure that's been tested in an external environment, code that STS model into a TF C++ version, and then use recent data in the chart to build prior distributions and 'every so often' run MCMC to 'forecast ahead' a few more periods... or rather, depending on how long the MCMC takes, run the updated priors 'on each new bar' to forecast out several bars forward and plot something like E[mid] and ~90% confidence intervals for the price range.

Again, thank you so much for your patience and for tolerating my long-winded questions, lol. I kind of assume I'm asking dumb things due to C++ being "as needed" and not my "daily driver", but I also hugely assume lots of other folks will at some point be searching for similar answers.
Date Time Of Last Edit: 2021-12-21 10:21:10
Image: c++ dev.png - Attached On 2021-12-21 07:13:14 UTC - Size: 431.21 KB
[2021-12-21 11:40:30]
User310645 - Posts: 49
Your picture looks good. But just to clarify, each study you add to the chart can only control one of the GUI implementations. All this wrapper code exists because they all share common code.

>>Is that first code block part of the study.h file defining the function, or part of the implementation,
Study is the base class, so each implementation (Slider, Combo etc.) extends it. DoInit(), DoStudy() etc. are virtual methods in the Study base class, so Study::Run() calls the actual implementations, e.g. Slider::DoInit(). i.e. it's just polymorphism. We store the Study* so we don't care what the actual implementation type is.

Study::StartStudy() is a static factory method to create/retrieve/run the study implementation when Sierra calls in. study.h/.cpp will pull in the .h files for each of the study type implementations.
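
A hedged sketch of that base/derived layout, using the method names from the Run() snippet above and assuming the interface is cached as a pointer to the underlying s_sc struct (the actual declarations in study.h may differ):

// Hypothetical shape of the Study base class and one implementation.
class Study {
public:
  explicit Study(s_sc* sc_) : _sc(sc_), Initialised(false) {}
  virtual ~Study() {}

  void Run();                        // shared lifecycle logic shown earlier
  void SetReferences(s_sc* sc_);     // refreshes _sc on each call

protected:
  virtual void Init() = 0;           // each study type overrides these
  virtual void DoSetDefaults() = 0;
  virtual void DoStudy() = 0;
  virtual void CleanUp() = 0;

  s_sc* _sc;
  bool Initialised;
};

class SliderStudy : public Study {
public:
  explicit SliderStudy(SCStudyInterfaceRef sc) : Study(&sc) {}

protected:
  void Init() override { Initialised = true; }
  void DoSetDefaults() override { /* set GraphName, inputs, subgraphs */ }
  void DoStudy() override { /* per-update calculation */ }
  void CleanUp() override { /* release the UI component */ }
};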

>>Another question are the two 'Study*' references....
We simply create the Study* if none already exists and one cannot be retrieved from Sierra via sc.GetPersistentPointer(). Otherwise you would need to recreate these objects each time Sierra calls in (which for a GUI object is obviously not what you want).

Sierra studies are normally stateless. Each chart update calls into the SCSFExport function and you normally perform whatever calculation you have at the current index. The wrapper just serves to make it more C++-like and keep a persistent object rather than recreating it each time.
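
For contrast, a minimal sketch of that usual stateless pattern (the study name is a placeholder):

// Hypothetical plain ACSIL study: each call just computes at the current index.
SCSFExport scsf_SimpleDifference(SCStudyInterfaceRef sc) {
  if (sc.SetDefaults) {
    sc.GraphName = "Simple Difference";  // placeholder name
    sc.AutoLoop = 1;                     // Sierra iterates sc.Index for us
    return;
  }
  sc.Subgraph[0][sc.Index] = sc.Close[sc.Index] - sc.Open[sc.Index];
}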

>>I guess what I'm not entirely sure of there is, regardless of which study entryway was triggered, its running thru the startstudy function, and getting assigned the same persistent pointer, just to be ran one time,
Again, this is just down to sharing code. Although they all call the same code, as far as Sierra is concerned there are 4 SCSFExport entries, which means 4 different studies.

>>exactly how sierrachart calls the "SCSFExport scsf_name(SCStudyInterfaceRef sc_) " function, because if its being called "non stop" in order to calculate a study, thats a lot...
Yes, it's called a lot: whenever data changes (within the bounds of its update cycle).

>>And along with that, since we can have multiple studies in a dll, will every addition of a study from that dll to a chart then be creating its own instance of being controlled by the study controller,
Yes, each time you add a study to the chart it gets its own instance. However, you need to be mindful of any shared memory within the DLL, e.g. static variables, which only have one copy within the DLL.
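
A hedged illustration of that caveat (the names are made up):

// Hypothetical: a static lives once per loaded DLL, so every study instance on
// every chart sees the same copy, whereas anything stored through
// sc.SetPersistentPointer() is per study instance.
static int g_SharedCounter = 0;  // one copy for the whole DLL

SCSFExport scsf_CounterExample(SCStudyInterfaceRef sc) {
  if (sc.SetDefaults) {
    sc.GraphName = "Counter Example";  // placeholder name
    return;
  }
  ++g_SharedCounter;  // incremented by every instance that calls in
}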

>>how sierrachart deals with threading processes?
For updating charts, Sierra is single-threaded, so each study gets one call on each chart update cycle.

>>Another thing is in these two lines 'study->SetReferences(&sc_);' and 'study->Run();'.
see above

>>we'd be able to implement dynamic libs the same as static libs?
Yes, it's a similar method, but if you are not sharing the code beyond a single study you might as well just compile it all together, i.e. a single Visual Studio solution containing a DLL project and all the individual source and header files.
If you are talking about external DLLs where you don't have the .lib, then you will need to load them at runtime and call the exported methods via LoadLibrary() and GetProcAddress().
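
A hedged sketch of that runtime-loading approach, assuming an external DLL exporting a function named "Predict" (both names are made up for the example):

#include <windows.h>

// Hypothetical signature exported by the external DLL.
typedef double (*PredictFn)(const double* features, int count);

double CallExternalPredict(const double* features, int count) {
  HMODULE module = LoadLibraryA("ExternalModel.dll");  // placeholder DLL name
  if (module == NULL)
    return 0.0;

  PredictFn Predict =
      reinterpret_cast<PredictFn>(GetProcAddress(module, "Predict"));

  double result = (Predict != NULL) ? Predict(features, count) : 0.0;

  FreeLibrary(module);  // in practice, load once and keep the handle
  return result;
}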

>>Besides this being a way cleaner method of developing in sierra, the main reason I need to learn to implement things this way is so I can have the functionality of tensorflow c++, and load trained models into a study.
>>I've also been thinking through what else is actually possible, since it would seem that "everything" is
Yes, you can link in any external library in the same way as if you were creating a standalone application.

I've never used TensorFlow, but I would imagine you would need to keep the TensorFlow "model" as a static/persistent variable in much the same way as the UI components above, as you probably don't want to be creating it on every call (although maybe you can, in which case a lot of this is unnecessary :-)).
On each call into the study, test your model with the updated data.
Extract the result from the model and plot it into a subgraph.
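
A hedged sketch of that persistence pattern, using a placeholder Model struct rather than the actual TensorFlow C++ API:

// Hypothetical: keep a heavy object alive between calls via a persistent
// pointer, evaluate it on each update, and free it on the last call.
struct Model {
  double Predict(double Input) { return Input; }  // stand-in for real inference
};

SCSFExport scsf_ModelStudy(SCStudyInterfaceRef sc) {
  if (sc.SetDefaults) {
    sc.GraphName = "Model Study";  // placeholder name
    sc.AutoLoop = 1;
    return;
  }

  Model* model = static_cast<Model*>(sc.GetPersistentPointer(1));
  if (model == NULL) {
    model = new Model();  // expensive construction happens only once
    sc.SetPersistentPointer(1, model);
  }

  sc.Subgraph[0][sc.Index] =
      static_cast<float>(model->Predict(sc.Close[sc.Index]));

  if (sc.LastCallToFunction) {
    delete model;
    sc.SetPersistentPointer(1, NULL);
  }
}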

Feel free to PM me as this is getting off topic from the OP.

Cheers
