As with hyperparameters, the performance of machine learning models also depends on their structure (also called configuration or architecture). Automated tools can systematically evaluate a larger number of options for this task, often resulting in better performance than manually trying different configurations.
Through programming by optimisation (PbO), more configuration options can be exposed. For instance, in neural architecture search (NAS) one could optimise only the number of layers, but by also exposing the type of each layer (e.g. convolutional, pooling, etc.), more and possibly better structures become available.
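The effect of exposing more configuration options can be sketched with a toy example. The code below is a minimal illustration, not a real NAS system: the layer types, the sampling functions, and the scoring function are all hypothetical stand-ins (a real objective would train and validate a network). It contrasts a narrow search space (depth only) with a PbO-style wider one (depth and per-layer type), explored with simple random search.

```python
import random

# Hypothetical layer types for this sketch (stand-ins for real NAS choices).
LAYER_TYPES = ["conv", "pool", "dense"]

def sample_depth_only(max_layers=8):
    # Narrow space: only the number of layers is configurable;
    # every layer is assumed to be convolutional.
    depth = random.randint(1, max_layers)
    return ["conv"] * depth

def sample_depth_and_type(max_layers=8):
    # Wider, PbO-style space: both depth and the type of each layer
    # are exposed, so more structures become reachable.
    depth = random.randint(1, max_layers)
    return [random.choice(LAYER_TYPES) for _ in range(depth)]

def random_search(sampler, score, n_trials=50, seed=0):
    # Simple random search: sample architectures, keep the best-scoring one.
    random.seed(seed)
    best_arch, best_score = None, float("-inf")
    for _ in range(n_trials):
        arch = sampler()
        s = score(arch)
        if s > best_score:
            best_arch, best_score = arch, s
    return best_arch, best_score

def toy_score(arch):
    # Placeholder objective standing in for a real train-and-validate step.
    return sum(1 for t in arch if t == "conv") - 0.1 * len(arch)

best, best_score = random_search(sample_depth_and_type, toy_score)
print(best, best_score)
```

In practice the sampler would be a proper search-space definition and the scoring function a full training run, but the structure is the same: a wider space gives the optimiser more candidate architectures to evaluate.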
As with any ML method, automatically configured models should be carefully assessed by team members and undergo peer review before being used in a production environment.