Apply the style of one image to another image.
The key is to use a trained CNN to separate the content of an image from its style.

# Separating Style and Content
As we go deeper into the network, the layers care more about the content of the image and less about details such as texture.

Style can be thought of as texture and curvature. You may use earlier layers in the network to capture it.

Correlation is a measure of the relationship between two or more variables.
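As a quick sketch of what correlation means (using NumPy, an assumption since the notes name no library, and made-up data), `np.corrcoef` measures how strongly two variables move together:

```python
import numpy as np

# Two variables that move together almost perfectly (illustrative values)
x = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
y = np.array([2.1, 3.9, 6.2, 8.1, 9.8])

# np.corrcoef returns the 2x2 correlation matrix; entry [0, 1] is corr(x, y)
r = np.corrcoef(x, y)[0, 1]
print(r)  # close to 1.0 for strongly related variables
```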
# VGG19 and content loss

The content image is passed through the feed-forward process until it reaches a conv layer deep in the network. That vector/feature map is used as the content representation.
When the network sees the style image, it extracts different features from multiple layers.
Content loss is a loss that calculates the difference between the content (Cc) and target (Tc) image representations.
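A minimal sketch of content loss, assuming NumPy and treating `Cc` and `Tc` as feature maps taken from the same deep conv layer (the array shapes here are illustrative):

```python
import numpy as np

def content_loss(Cc, Tc):
    """Mean squared difference between the content (Cc) and target (Tc) feature maps."""
    return np.mean((Cc - Tc) ** 2)

# Illustrative feature maps: 64 channels of 32x32 activations
Cc = np.random.rand(64, 32, 32)
Tc = Cc.copy()
print(content_loss(Cc, Tc))  # 0.0 when the target matches the content exactly
```

Minimizing this loss pushes the target image's deep features toward those of the content image.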

Note: We are only using the VGG network as a feature extractor; we are not actually training the network.
# Gram Matrix
The correlations at each conv layer are given by a Gram matrix. The first step is to vectorize the values of a feature map.
Below we are vectorizing a single feature map.

A conv layer will contain multiple feature maps. Here we are converting a 3D conv layer to a 2D matrix of values.

The next step is to multiply the vectorized feature maps by their transpose to get the Gram matrix.
In the image below, cell (4, 2) is the correlation between feature map 4 and feature map 2.
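The two steps above (vectorize each feature map, then multiply by the transpose) can be sketched in NumPy; the layer shape is illustrative:

```python
import numpy as np

# A conv layer output: d feature maps, each h x w (illustrative sizes)
d, h, w = 8, 4, 4
layer = np.random.rand(d, h, w)

# Step 1: vectorize each feature map -> a (d, h*w) matrix
flat = layer.reshape(d, h * w)

# Step 2: multiply by the transpose -> a (d, d) Gram matrix
gram = flat @ flat.T

print(gram.shape)  # (8, 8)
# Cell (4, 2) is the dot product (correlation) of feature maps 4 and 2
print(np.allclose(gram[4, 2], np.dot(flat[4], flat[2])))  # True
```

Note the Gram matrix is symmetric: cell (4, 2) equals cell (2, 4).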

# Style Image
Style loss is the mean squared distance between the Gram matrix of the style image and the Gram matrix of the target image.
It calculates the difference between the style image's style (Ss) and the target image's style (Ts).
a is a constant that accounts for the number of values in each layer, and w is the style weight for that layer.
Style is captured as a Gram matrix.
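Putting the pieces together, a hedged sketch of style loss over several layers (the layer shapes, the per-layer weights `w`, and the exact form of the normalizing constant `a` are illustrative assumptions):

```python
import numpy as np

def gram(layer):
    """Gram matrix of a (d, h, w) conv layer: vectorize, then multiply by the transpose."""
    d, h, w = layer.shape
    flat = layer.reshape(d, h * w)
    return flat @ flat.T

def style_loss(style_layers, target_layers, weights):
    """Weighted mean squared distance between Gram matrices at each layer."""
    loss = 0.0
    for Ss, Ts, w in zip(style_layers, target_layers, weights):
        d, h, w_ = Ss.shape
        a = 1.0 / (d * h * w_)  # constant accounting for the number of values in the layer
        loss += w * a * np.mean((gram(Ss) - gram(Ts)) ** 2)
    return loss
```

When the target's Gram matrices match the style image's at every chosen layer, this loss is zero.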


Total loss is the sum of content and style loss.

# Loss Weights
The alpha-beta ratio is the ratio between alpha (the content weight) and beta (the style weight).
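The total loss above can be sketched as a weighted sum; the alpha and beta values here are illustrative:

```python
def total_loss(content_loss, style_loss, alpha=1, beta=10_000):
    """Weighted sum of the two losses: alpha is the content weight, beta the style weight."""
    return alpha * content_loss + beta * style_loss

# An alpha/beta ratio of 1/10,000 lets the style term dominate
print(total_loss(0.5, 0.002))  # 1*0.5 + 10000*0.002 = 20.5
```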

Different alpha-beta ratios can result in different generated images.

If we set the ratio to 1/10,000, the generated image mostly shows the style.