# HOOPS Visualize Web package
This is the workspace for web components of Communicator.
The workspace has been generated and is managed by Nx.
🚧 This workspace is still under construction. Things may change for a better dev experience.

## Projects description
Each project has its own folder in either the `packages` or `apps` directory:

`packages`:

- `common`: groups the most basic types.
- `sc-engine`: shell for the wasm module (stream client and rendering engine).
- `streamcache`: interface for the wasm module (stream client and rendering engine).
- `ui-kit`: groups very basic pure UI elements like a button or a tree.
- `web-viewer`: the entry point of the web viewer SDK.
- `web-viewer-components`: groups high-level UI elements to interact with a web viewer, like a model tree.
- `web-viewer-monolith`: the monolithic version of the web-viewer (the wasm is directly embedded, no extra download at runtime).

`apps`:

- `demo`: the new demo app, using `ui-kit` and `web-viewer-components`.
- `sc-spawn-service`: a web library to interface with the spawn feature of the node_server.
- `ui-samples`: a set of samples of individual features of the web viewer.
- `web-viewer-demo`: a web app demo with a minimal UI, known as the old demo.
## Getting started
Install dependencies:
```sh
npm install
```
Build wasm:
For now, the wasm build is not managed from this package. The build needs to be triggered manually when needed.
You can use the old commands from the old code base in `applications/client`:

```sh
# With a native emscripten installation
npm run build-sc

# With docker
npm run build-sc-docker
```
Then, the copy of the required files is performed automatically by `tools/scripts/pre-build.mjs`.

⚠️ Using the debug version of the wasm will require some customization.

Add models:
Everything in a `public` folder will be served by the development server, so you can, for example, create the folders `models/default` in `apps/demo/public` and add some models like microengine.scs or arboleda.scs.

Same in `apps/web-viewer-demo/public`.

To dev:
```sh
npx nx run demo:serve
```
This will build and serve the demo app on http://127.0.0.1:4200/.

To see the microengine model, for example: http://127.0.0.1:4200?scs=models/default/microengine.scs.
Any change in the code will trigger a rebuild.
## Tests
For more information about how tests work and how to write them, see the testing guide.
### Web Viewer

#### Unit tests
Most of the tests are done using Storybook. The reason for this is that the wasm cannot be loaded in a unit test harness because of the limitations of the virtual DOM environment. Therefore, whenever tests need the actual engine to run, we use Storybook to play them in a browser (a headless browser on CI). This comes at the cost of performance: since the browser, the wasm, and Storybook itself take some time to mount the environment, tests usually take several seconds to run where a unit test would take a few milliseconds. To reduce the time per test and improve the developer experience, devs can run tests locally using this command:
```sh
npx nx test web-viewer --watch
```
The watch argument is optional but allows the test runner to watch the files of the project and wait for changes. When changes occur, the test runner reruns the tests related to the changed files. This way users can check their code continuously without having to either check the browser through the app or Storybook, or launch time-consuming tests.
Most of the tests on Storybook could be moved to unit tests, which would improve our test pyramid and benefit both efficiency and stability.

##### Turn functional tests into unit tests
The first step of turning a functional test into a unit test is to find an appropriate function. An appropriate function is one that contains intrinsic algorithmic value. This means that the function is not simply API calls on the engine; it is a function that contains logic itself. Example:
```ts
// ModelStructure.ts

// Not much algorithmic value to test
private getAllBimInfos(nodeId: RuntimeNodeId): BimObject[] {
  // Not interesting for unit test
  const contextNode = this.lookupAnyTreeNode(nodeId);
  if (contextNode === null) {
    return [];
  }
  const inclusionContext = towardInclusionContext(contextNode);
  return inclusionContext.getBimInfos();
}

// More algorithmic value to test
public getBimIdRelationshipTypes(
  contextNodeId: RuntimeNodeId,
  nodeId: BimId,
): RelationshipInfo[] {
  const contextNode = this.lookupAnyTreeNode(contextNodeId);
  if (contextNode === null) {
    return [];
  }
  const inclusionContext: InclusionContext = towardInclusionContext(contextNode);
  const relationships = inclusionContext.getRelationships();

  const relationMap = new Map<RelationshipType, { relateds: Set<BimId>; relatings: BimId[] }>();
  for (const iterRel of relationships) {
    const relations = (
      relationMap.has(iterRel.type)
        ? relationMap.get(iterRel.type)
        : { relateds: new Set<BimId>(), relatings: [] }
    ) as { relateds: Set<BimId>; relatings: BimId[] };

    if (iterRel.relating?.relationElt.id === nodeId) {
      iterRel.related?.relationships.forEach((current) => relations.relateds.add(current.id));
    } else if (iterRel.related !== null && iterRel.relating !== null) {
      iterRel.related.relationships
        .filter((current) => current.id === nodeId)
        .forEach(() => relations.relatings.push(iterRel.relating!.relationElt.id));
    }

    relationMap.set(iterRel.type, relations);
  }

  const outBimInfo = Array.from(relationMap.entries()).map((entry) => {
    return {
      type: entry[0],
      relateds: Array.from(entry[1].relateds),
      relatings: Array.from(entry[1].relatings),
    };
  });

  return outBimInfo;
}
```
Now let's focus on how to turn `getBimIdRelationshipTypes` into a unit-testable function. If the function does not contain too many things to mock, we can mock the necessary objects and APIs and avoid too much tweaking (since interfaces have been made to avoid the necessity of concrete objects). Let's take an example:

```ts
// Model.ts
public getBimIdConnectedElements(
  node: NodeId,
  bimId: BimId,
  type: RelationshipType,
): { relateds: BimId[]; relatings: BimId[] } {
  const infoRelation = this._modelStructure.getBimIdRelationshipTypes(node, bimId);
  for (const iter of infoRelation) {
    if (iter.type === type) {
      return { relateds: iter.relateds, relatings: iter.relatings };
    }
  }
  return { relateds: [], relatings: [] };
}
```

```ts
// Model.test.ts
describe('Model', () => {
  let model!: Model;
  const modelStructure = {
    getBimIdRelationshipTypes: vi.fn((/* args */) => {
      // Return some hardcoded data
    }),
  };
  const engine = {};
  const cbMgr = {
    bind: vi.fn(),
  };

  beforeAll(() => {
    model = new Model(engine as unknown as IScEngine, cbMgr as unknown as ICallbackManager);
    model._setModelStructure(modelStructure as unknown as IModelStructure);
  });

  afterEach(() => {
    modelStructure.getBimIdRelationshipTypes.mockClear();
    cbMgr.bind.mockClear();
  });

  describe('getBimIdConnectedElements', () => {
    it('should return the expected values', async () => {
      // implement test
    });
  });
});
```
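As a rough sketch of what that test body might look like, slotting into the `describe` block above; the node id, BIM id, and relationship data used here are hypothetical, purely for illustration:

```ts
// Hypothetical test body (the ids and relationship data are made up for illustration).
it('should return the expected values', () => {
  // Hardcoded data returned by the mocked ModelStructure.
  modelStructure.getBimIdRelationshipTypes.mockReturnValue([
    { type: RelationshipType.ContainedInSpatialStructure, relateds: [], relatings: ['13'] },
  ]);

  const result = model.getBimIdConnectedElements(
    1,
    '17321',
    RelationshipType.ContainedInSpatialStructure,
  );

  // The Model simply forwards to ModelStructure and picks the entry matching the type.
  expect(modelStructure.getBimIdRelationshipTypes).toHaveBeenCalledWith(1, '17321');
  expect(result).toEqual({ relateds: [], relatings: ['13'] });
});
```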
Now, how do we deal with functions that are heavy to mock? Let's get back to our previous example with `ModelStructure.getBimIdRelationshipTypes`. If we were to use the previous approach, we would have to mock `towardInclusionContext` and whatever is necessary for `InclusionContext.getRelationships()` to provide data. Although it would be feasible, there is a more effective approach. This function is made of two sections: the first one calls internal and external APIs (`towardInclusionContext` and `InclusionContext.getRelationships()`); the second one is the algorithmic logic that we want to test. The idea here is to split the function in two and extract the algorithm into a function that we will test.

```ts
public getBimIdRelationshipTypes(
  contextNodeId: RuntimeNodeId,
  nodeId: BimId,
): RelationshipInfo[] {
  const contextNode = this.lookupAnyTreeNode(contextNodeId);
  if (contextNode === null) {
    return [];
  }
  const inclusionContext: InclusionContext = towardInclusionContext(contextNode);
  return this.$mapBimRelationshipTypes(nodeId, inclusionContext.getRelationships());
}

/** @hidden */
public $mapBimRelationshipTypes(
  nodeId: BimId,
  relationships: Relationship[],
): RelationshipInfo[] {
  /* ... */
}
```

The function could have been made private, with some tweaking done to access it in the test, but it is not really worth the effort, so the function is hidden in the doc and prefixed with a `$` to note that it is meant for testing purposes.
We can produce the data ourselves, but it is also very convenient to get it directly from the real engine: call the API, log the result in the console, then copy-paste it into a JSON file (preferably minified). Once done, we can use it in the tests:
```ts
describe('ModelStructure', () => {
  let modelStructure: ModelStructure;
  const mockEngine = {};
  const mockView = {};
  const mockCallbackManager = {};
  let mockCuttingManager: AbstractCuttingManager;
  let mockModel: IModel;
  let mockConfig: AssemblyTreeConfig;

  beforeEach(() => {
    mockCuttingManager = {} as AbstractCuttingManager;
    mockModel = {} as IModel;
    mockConfig = {} as AssemblyTreeConfig;
    modelStructure = new ModelStructure(
      mockConfig,
      mockEngine as any,
      mockCallbackManager as any,
      mockCuttingManager,
      mockModel,
    );
  });

  describe('test BIM relationship', () => {
    let relationships: Relationship[];

    beforeAll(async () => {
      const data = await readFile(
        path.join(__dirname, '../../specs/data/InclusionContext.getRelationships.min.json'),
        { encoding: 'utf-8' },
      );
      relationships = JSON.parse(data);
    });

    it('get BIM connected elements', async () => {
      const nodeRelationships = modelStructure.$mapBimRelationshipTypes('17321', relationships);
      const toTest = nodeRelationships.find(
        (current) => current.type === RelationshipType.ContainedInSpatialStructure,
      );
      expect(toTest?.relateds).toEqual([]);
      expect(toTest?.relatings).toEqual(['13']);
    });

    // ...
  });
});
```
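For illustration, here is what a minimal fixture entry consistent with the assertions above could look like. The shape is inferred from the `getBimIdRelationshipTypes` algorithm shown earlier; the real `Relationship` objects produced by `InclusionContext.getRelationships()` most likely carry more fields.

```ts
// Minimal illustrative fixture (shape inferred from the algorithm above; not the real data).
const relationships = [
  {
    type: RelationshipType.ContainedInSpatialStructure,
    relating: { relationElt: { id: '13' } },       // the spatial structure '13' ...
    related: { relationships: [{ id: '17321' }] }, // ... contains the element '17321'
  },
] as unknown as Relationship[];

// $mapBimRelationshipTypes('17321', relationships) then reports relatings: ['13'] and
// relateds: [] for ContainedInSpatialStructure, matching the expectations of the test above.
```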
As you can see, it does not require much work to set up these tests, and they are very efficient. For example, testing the BIM relationships with a unit test takes a few milliseconds, while the exact same test on Storybook takes up to 15s or gets timed out. This does not mean that Storybook is bad or that we should avoid it: Storybook is an incredibly efficient tool to test the WASM and its integration in our JS library, but we can ensure that the JS part of the product is stable and works as expected without integrating the WASM.
#### Functional tests
Functional tests are done with Storybook in the `packages/web-viewer-components` project.

Tests use models located in the communicator_datasets repo, so you need a local copy of this repo. To tell Storybook where the datasets repo is, set the `STORYBOOK_DATASETS_PATH` environment variable, or directly modify the value in `packages/web-viewer-components/.storybook/main.ts`.

You can find the sources at `packages/web-viewer-components/src/stories`.

To launch the tests, you need to run the Storybook dev server:
```sh
npx nx run web-viewer-components:storybook
```
Then, you can directly browse the Storybook frontend at http://localhost:4400 or launch the tests through the command line.

Run all functional tests as done in CI:
```sh
npx nx run web-viewer-components:test-storybook:ci
```
Run specific tests; the path can be a folder or a test file directly, relative to the `packages/web-viewer-components` folder:

```sh
npx nx run web-viewer-components:test-storybook:development <path>
```
### UI components
Both `packages/ui-kit` and `packages/web-viewer-components` have a Storybook and use Vitest for unit tests.

#### Storybook
Run the storybook:
```sh
npx nx run ui-kit:storybook
npx nx run web-viewer-components:storybook
```
Tests use models located in the communicator_datasets repo, like the Web Viewer Storybook tests do. The path is resolved with the `STORYBOOK_DATASETS_PATH` environment variable.

The environment variable can be set directly in the console or via an env file in the `packages/web-viewer-components/.storybook` and `packages/ui-kit/.storybook` folders. Just copy `example.storybook.env`, rename it `storybook.env`, and set your correct path in it.

You can browse http://localhost:4400/ or use the test runner. A path to a file and/or a tags filter can be passed in the command:
```sh
npx nx run ui-kit:test-storybook [<path relative to packages/ui-kit>] [--includeTags="<tags>"]
npx nx run web-viewer-components:test-storybook [<path relative to packages/web-viewer-components>] [--includeTags="<tags>"]
```
#### Vitest
```sh
npx nx run ui-kit:test-coverage [-- -t "<test name>"] [--testFiles="<filename>"]
npx nx run web-viewer-components:test-coverage [-- -t "<test name>"] [--testFiles="<filename>"]
```
### UI demo
More sophisticated tests are done directly in the demo context with Playwright to verify UI components in an integrated environment.
CI-like:
```sh
npx concurrently -k -s first -n "SERVER,MODELS,TEST" -c "magenta,cyan,blue" \
  "npx nx run demo:preview" \
  "npx nx run :serve-models:ci --hoops_models_path=<FUNCTIONAL-TESTS folder>" \
  "npx nx run demo-e2e:e2e-w-wait:ci"
```
That will:
- Launch the demo preview.
- Launch a small http server for models in communicator_datasets.
- Launch Playwright tests.
Dev:
Instead of the demo preview, you can run the dev server; both serve on port 4200.
If you have models in the public directory, you can skip the small http server and tell Playwright to browse models directly at the same location with the env variable `HOOPS_MODELS_URL`.

In this case, you just have to run the dev e2e and filter with the `--grep` option:
```sh
npx nx run demo-e2e:e2e [--grep <tag or name>]
```
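For illustration, a Playwright spec in `demo-e2e` could look roughly like the following; the test name, tag, selector, and assertion are assumptions for this sketch, not the actual tests of this repo:

```ts
// Hypothetical spec: the test name, tag, and selector are illustrative only.
import { test, expect } from '@playwright/test';

// Models are resolved from the local models server, or from HOOPS_MODELS_URL when it is set.
const modelsUrl = process.env.HOOPS_MODELS_URL ?? 'models/default';

test('loads a model in the demo @smoke', async ({ page }) => {
  // The demo (preview or dev server) is expected to run on port 4200.
  await page.goto(`http://127.0.0.1:4200/?scs=${modelsUrl}/microengine.scs`);
  // The real assertions depend on the demo's DOM; checking for a canvas is just an example.
  await expect(page.locator('canvas')).toBeVisible();
});
```

Such a test could then be selected with `--grep` by its tag or name when running the dev e2e target.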
## Documentation
The generation of the documentation is a three-step process:
1. Typedoc API generation as JSON,
2. conversion of the JSON API files into RST pages,
3. build of the product documentation with the RST API files.
Tasks #2 and #3 depend on Python (v3.11+), so you will need to install the dependencies listed in the Documentation Requirements file.
It is recommended that you use a virtual environment to install these dependencies in order not to pollute your machine. To create the virtual environment, you can run the following command:

```sh
python -m venv .venv
```

Then you can activate the environment as follows:

```sh
source ./.venv/bin/activate
```

Finally, you can install the dependencies:

```sh
python -m pip install -r ../../documentation/requirements.txt
```
In order to build the documentation you can simply type the following command:
```sh
npm run doc
```
This command will run the entire documentation pipeline; the documentation will be ready to release when it completes.
If you want to generate the API reference RST files you can run the following command:

```sh
npm run api-ref
```

Note that this command will clean the API ref directories in the documentation folder and do a clean build.
Finally, if you only want to produce the API JSON files, you can do it through this command:

```sh
npm run typedoc
```