Categorical variables should just work out of the box: each category shows up as its own term in the model.
To frame the answers to the rest of your questions: there are two types of models used for the side information, depending on whether side_data_factorization is True or False at model creation. If side_data_factorization is False, all of the side information is fit with a linear model, so each category or term in the side data shifts the predicted score up or down by a learned weight, depending on what fits the data best. Generally speaking, most of the power of this model is in the latent factors associated with the users and items; the side terms give slight adjustments.
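To make that concrete, here is a minimal sketch (not the library's actual code) of how a score is assembled in the linear-side-term case: the user/item latent interaction carries most of the weight, and each side feature adds a standalone linear term. The feature names and numbers are made up for illustration.

```python
import numpy as np

def score_linear_side(u_factors, i_factors, u_bias, i_bias, side_weights, side_values):
    """Predicted score = biases + latent user/item interaction + linear side terms."""
    latent = float(np.dot(u_factors, i_factors))  # bulk of the model's power
    # Each side feature contributes weight * value, independent of the factors.
    side = sum(side_weights[f] * x for f, x in side_values.items())
    return u_bias + i_bias + latent + side

u = np.array([0.5, -0.2, 0.1])   # hypothetical user factors
v = np.array([0.3, 0.4, -0.1])   # hypothetical item factors
# Hypothetical side features: a one-hot category tag and a numeric column.
weights = {"genre=comedy": 0.15, "age": -0.01}
print(score_linear_side(u, v, 0.1, 0.2, weights, {"genre=comedy": 1.0, "age": 30.0}))
```

Note that the side contribution here is just a fixed additive nudge, which is what makes the weights directly readable.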
You can see these weights in the side data entry of m.get("coefficients"). The linear terms give the per-feature weights; larger-magnitude weights have a more significant effect.
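As a plain-Python sketch of how you might read those weights, assume the coefficients have been pulled out into a simple {feature: weight} dict (the real output of m.get("coefficients") is a model-specific structure; the values below are invented):

```python
# Hypothetical linear side-term weights extracted from the model.
coefficients = {"genre=comedy": 0.15, "genre=horror": -0.40, "year": 0.02}

# Rank features by |weight|: larger magnitude means a more significant effect.
ranked = sorted(coefficients.items(), key=lambda kv: abs(kv[1]), reverse=True)
for feature, w in ranked:
    direction = "raises" if w > 0 else "lowers"
    print(f"{feature}: {direction} the score (weight {w:+.2f})")
```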
If side_data_factorization is True, the effect of the side features is harder to interpret. In this case they also interact with the user and item latent factors, so it isn't really possible to say how much effect a user/item tag has on the model without also looking at the associated user and item factors.
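A sketch of why this is harder to interpret, assuming a factorization-machine-style pairwise term (my assumption about the general form, not the library's exact implementation): each side feature gets its own latent vector, and its contribution is a sum of interactions with whichever user and item factors are active, not a single standalone weight.

```python
import numpy as np

def score_factorized_side(factors, values):
    """factors: {feature: latent vector}; values: {feature: value}.
    Sums pairwise dot products over all active features (user, item, tags)."""
    names = [f for f in values if values[f] != 0]
    total = 0.0
    for i in range(len(names)):
        for j in range(i + 1, len(names)):
            fi, fj = names[i], names[j]
            total += values[fi] * values[fj] * float(np.dot(factors[fi], factors[fj]))
    return total

factors = {
    "user_42":      np.array([0.5, -0.2]),  # hypothetical user factors
    "item_7":       np.array([0.3, 0.4]),   # hypothetical item factors
    "genre=comedy": np.array([0.1, 0.2]),   # the tag now has factors too
}
values = {"user_42": 1.0, "item_7": 1.0, "genre=comedy": 1.0}
print(score_factorized_side(factors, values))
```

The tag's contribution changes with every user/item pair it appears alongside, which is why there is no single number you can point to as "the effect of this tag".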
The setup I recommend for this is to use the linear model for the side features (so side_data_factorization = False), but to take the numeric features and bin them using one of the feature transforms. This tends to give a good model while also keeping the side coefficients interpretable.
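The binning idea can be sketched with np.digitize (the library's feature-engineering transforms do this for you; the column name and bin edges below are made up): each numeric value becomes a categorical bin label, so the linear side model learns one readable weight per bin instead of a single slope.

```python
import numpy as np

# Hypothetical numeric side column and bin boundaries.
ages = np.array([15, 22, 37, 64, 80])
edges = [18, 30, 50, 65]

# np.digitize maps each value to the index of the bin it falls in (0..len(edges)).
bins = np.digitize(ages, edges)
labels = [f"age_bin_{b}" for b in bins]
print(labels)
```

Each resulting label then behaves like any other categorical term: it gets its own coefficient, with sign and magnitude you can read directly.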