Version: 0.21

Optimizations & Best Practices

Using smart pointers effectively

Note: if you're unsure about some of the terms used in this section, the Rust Book has a useful chapter about smart pointers.

To avoid cloning large amounts of data to create props when re-rendering, we can use smart pointers to clone only a reference to the data instead of the data itself. If you pass references to the relevant data in your props and child components instead of the actual data, you can avoid cloning any data until you need to modify it in the child component, where you can use Rc::make_mut to clone the data and obtain a mutable reference to the part you want to alter.
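For example, a minimal sketch of what this might look like (the Props struct, field names, and component here are illustrative, not taken from an existing codebase):

use std::rc::Rc;
use yew::prelude::*;

#[derive(Properties, PartialEq)]
pub struct Props {
    // Cloning these props only bumps a reference count instead of copying the Vec.
    pub names: Rc<Vec<String>>,
}

#[function_component]
fn NameList(props: &Props) -> Html {
    // Cheap clone of the Rc handle, not of the underlying Vec.
    let mut names = Rc::clone(&props.names);
    // Rc::make_mut clones the Vec only if other Rc handles still point to it,
    // then hands back a mutable reference we can alter.
    Rc::make_mut(&mut names).push("added in the child".to_string());

    html! {
        <ul>
            { for names.iter().map(|name| html! { <li>{ name.clone() }</li> }) }
        </ul>
    }
}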

This brings further benefits in Component::changed when working out whether prop changes require the component to re-render: instead of comparing the values of the data, the underlying pointer addresses (i.e. the positions in a machine's memory where the data is stored) can be compared; if two pointers point to the same data, then the values of the data they point to must be the same. Note that the inverse might not be true! Even if two pointer addresses differ, the underlying data might still be equal - in this case you should compare the underlying data.

To do this comparison you'll need to use Rc::ptr_eq instead of just using PartialEq (which is used automatically when comparing data with the equality operator ==). The Rust documentation has more details about Rc::ptr_eq.
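A sketch of what such a comparison might look like in a manual PartialEq implementation for a props struct (the struct and field names are illustrative):

use std::rc::Rc;
use yew::Properties;

#[derive(Properties)]
pub struct Props {
    pub names: Rc<Vec<String>>,
}

impl PartialEq for Props {
    fn eq(&self, other: &Self) -> bool {
        // Cheap check first: do both Rcs point at the same allocation?
        // Only fall back to comparing the underlying data if they don't.
        Rc::ptr_eq(&self.names, &other.names) || self.names == other.names
    }
}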

This optimization is most useful for data types that don't implement Copy. If you can copy your data cheaply, then it isn't worth putting it behind a smart pointer. For structures that can be data-heavy, like Vec, HashMap, and String, using smart pointers is likely to bring performance improvements.

This optimization works best if the values are never updated by the children, and even better if they are rarely updated by parents. This makes Rc<_>s a good choice for wrapping property values in pure components.

However, it must be noted that unless you need to clone the data yourself in the child component, this optimization is not only useless, but it also adds the unnecessary cost of reference counting. Props in Yew are already reference counted and no data clones occur internally.

View functions

For code readability reasons, it often makes sense to migrate sections of html! to their own functions. Not only does this make your code more readable by reducing the amount of indentation, it also encourages good design patterns, particularly around building composable applications, because these functions can be called in multiple places, which reduces the amount of code that has to be written.
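As an illustration, a piece of a larger html! block can be pulled out into its own function like this (a minimal sketch; the names are made up):

use yew::prelude::*;

// A view function building one part of the page.
fn navbar(title: &str) -> Html {
    html! {
        <nav class="navbar">
            <h1>{ title }</h1>
        </nav>
    }
}

#[function_component]
fn App() -> Html {
    html! {
        <>
            { navbar("My App") }
            <main>{ "Page content" }</main>
        </>
    }
}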

Pure Components

Pure components are components that don't mutate their state; they only display content and propagate messages up to normal, mutable components. They differ from view functions in that they can be used within the html! macro using component syntax (<SomePureComponent />) instead of expression syntax ({some_view_function()}), and in that, depending on their implementation, they can be memoized - meaning that once the component is rendered for a given set of props, the result is "saved" and reused instead of being recomputed when the same props appear again - preventing re-renders for identical props. Yew compares the props internally, so the UI is only re-rendered if the props change.
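A minimal sketch of a pure function component and the component syntax used to render it (the names are illustrative):

use yew::prelude::*;

#[derive(Properties, PartialEq)]
pub struct GreetingProps {
    pub name: AttrValue,
}

// A pure component: it only renders its props and keeps no state of its own.
#[function_component]
fn Greeting(props: &GreetingProps) -> Html {
    html! { <p>{ format!("Hello, {}!", props.name) }</p> }
}

#[function_component]
fn App() -> Html {
    // Component syntax instead of calling a view function as an expression.
    html! { <Greeting name="World" /> }
}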

Reducing compile time using workspaces

Arguably, the largest drawback to using Yew is the long time it takes to compile Yew apps. The time taken to compile a project seems to be related to the quantity of code passed to the html! macro. This tends to not be much of an issue for smaller projects, but for larger applications, it makes sense to split code across multiple crates to minimize the amount of work the compiler has to do for each change made to the application.

One possible approach is to make your main crate handle routing/page selection, and then make a different crate for each page, where each page could be a different component or just a big function that produces Html. Code that is shared between the crates containing different parts of the application could be stored in a separate crate which the project depends on. In the best-case scenario, you go from rebuilding all of your code on each compile to rebuilding only the main crate, and one of your page crates. In the worst case, where you edit something in the "common" crate, you will be right back to where you started: compiling all code that depends on that commonly shared crate, which is probably everything else.
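As a rough sketch, the main crate might then contain little more than the router, delegating rendering to the page crates (the yew_router usage and the page_home/page_settings crate names are assumptions made for illustration):

use yew::prelude::*;
use yew_router::prelude::*;

// Each page lives in its own crate; these crate names are hypothetical.
use page_home::Home;
use page_settings::Settings;

#[derive(Clone, Routable, PartialEq)]
enum Route {
    #[at("/")]
    Home,
    #[at("/settings")]
    Settings,
}

fn switch(route: Route) -> Html {
    match route {
        Route::Home => html! { <Home /> },
        Route::Settings => html! { <Settings /> },
    }
}

#[function_component]
fn App() -> Html {
    html! {
        <BrowserRouter>
            <Switch<Route> render={switch} />
        </BrowserRouter>
    }
}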

If your main crate is too heavyweight, or you want to rapidly iterate on a deeply nested page (e.g. a page that renders on top of another page), you can use an example crate to create a simplified implementation of the main page and additionally render the component you are working on.

Reducing binary sizes

  • optimize Rust code
  • Cargo.toml (defining a release profile)
  • optimize Wasm code using wasm-opt

Note: more information about reducing binary sizes can be found in the Rust Wasm Book.

Cargo.toml

It is possible to configure release builds to be smaller using the available settings in the [profile.release] section of your Cargo.toml.

[profile.release]
# less code to include in the binary
panic = 'abort'
# optimize across the whole codebase (better optimization, slower build)
codegen-units = 1
# optimization for size (more aggressive)
opt-level = 'z'
# optimization for size
# opt-level = 's'
# link-time optimization using whole-program analysis
lto = true

Nightly Cargo configuration

You can also gain additional benefits from experimental nightly features of Rust and Cargo. To use the nightly toolchain with trunk, set the RUSTUP_TOOLCHAIN="nightly" environment variable. Then you can configure unstable rustc features in your .cargo/config.toml. Refer to the documentation of unstable features, specifically the sections about build-std and build-std-features, to understand the configuration.

.cargo/config.toml
[unstable]
# Requires the rust-src component. `rustup +nightly component add rust-src`
build-std = ["std", "panic_abort"]
build-std-features = ["panic_immediate_abort"]
Caution: the nightly Rust compiler can contain bugs, such as this one, that require occasional attention and tweaking. Use these experimental options with care.

wasm-opt

Further, it is possible to optimize the size of wasm code.

The Rust Wasm Book has a section about reducing the size of Wasm binaries: Shrinking .wasm size

  • using wasm-pack which by default optimizes wasm code in release builds
  • using wasm-opt directly on wasm files.
wasm-opt wasm_bg.wasm -Os -o wasm_bg_opt.wasm

Build size of 'minimal' example in yew/examples/

Note: wasm-pack combines optimization for Rust and Wasm code. wasm-bindgen is used in this example without any Rust size optimization.

used tool                      size
wasm-bindgen                   158KB
wasm-bindgen + wasm-opt -Os    116KB
wasm-pack                      99KB

Further reading: