Core Concepts
The authors introduce TrojanNet: a secret network hidden inside a carrier network and trained on a distinct task. The two models share parameters, linked by a permutation of the carrier's weights, but share no features or activations. Training optimizes the public-task and secret-task losses simultaneously over a public and a secret dataset.
Abstract
A TrojanNet is trained within a carrier network on distinct tasks: the two models share parameters but no features, and the combined objective sums the task losses over the public and secret datasets. Because the permutation operator merely reindexes the weights, it is straightforward to differentiate through, so plain gradient descent on the shared weights optimizes both losses at once; multiple hidden tasks can be trained, each keyed by its own permutation.
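The joint training loop can be sketched in NumPy. This is a minimal toy, not the paper's setup: the "network" is a single weight vector, both tasks are linear regressions, and the permutation key, datasets, and learning rate are all illustrative assumptions. The public task reads the shared weights directly, while the secret task reads them through the permutation π; each gradient step sums both task gradients, scattering the secret one back with π⁻¹.

```python
import numpy as np

rng = np.random.default_rng(1)
d = 8
perm = rng.permutation(d)            # secret permutation key pi (toy stand-in)
w = np.zeros(d)                      # single shared weight vector

# Toy public and secret linear-regression datasets with different ground truths.
Xp, Xs = rng.normal(size=(64, d)), rng.normal(size=(64, d))
yp = Xp @ rng.normal(size=d)
ys = Xs @ rng.normal(size=d)

lr = 0.05
for _ in range(500):
    # Public task uses w directly; secret task uses the permuted weights w_pi.
    r_pub = Xp @ w - yp
    r_sec = Xs @ w[perm] - ys
    g_pub = Xp.T @ r_pub / len(Xp)           # dL_public / dw
    g_sec_wpi = Xs.T @ r_sec / len(Xs)       # dL_secret / dw_pi
    g_sec = np.empty(d)
    g_sec[perm] = g_sec_wpi                  # scatter back with pi^{-1}
    w -= lr * (g_pub + g_sec)                # one step on the combined loss
```

With only d parameters serving two tasks, this toy cannot fit both regressions exactly; the real method relies on the carrier network being overparameterized enough that both losses can be driven low.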
Stats
L = (1/B) ∑_{i=1}^{B} L_public(h(x_i), y_i) + (1/B') ∑_{i=1}^{B'} L_secret(h_π(x̃_i), ỹ_i)
∂L/∂w = ∂L_public/∂w + π^{-1}(∂L_secret/∂w_π)

where w_π = π(w) are the permuted weights used by the secret model h_π; the inverse permutation maps the secret-task gradient back into the carrier's weight ordering.
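The gradient identity can be checked numerically. A minimal NumPy sketch, assuming toy scalar losses in place of the real task losses (the vectors, losses, and permutation here are illustrative, not the paper's): differentiate L_secret with respect to the permuted weights w_π, scatter that gradient back with π⁻¹, and add the public-task gradient.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 5
w = rng.normal(size=n)               # shared carrier weights
perm = rng.permutation(n)            # permutation pi (toy stand-in for the key)
x_pub, x_sec = rng.normal(size=n), rng.normal(size=n)

# Toy scalar losses: L_public reads w directly, L_secret reads w_pi = w[perm].
L_pub = lambda v: 0.5 * (v @ x_pub) ** 2
L_sec = lambda vp: 0.5 * (vp @ x_sec) ** 2

# Analytic gradient of each loss w.r.t. its own argument.
g_pub = (w @ x_pub) * x_pub              # dL_public / dw
w_pi = w[perm]
g_sec_wpi = (w_pi @ x_sec) * x_sec       # dL_secret / dw_pi

# Chain rule through the permutation: scatter dL_secret/dw_pi with pi^{-1}.
g_sec_w = np.empty(n)
g_sec_w[perm] = g_sec_wpi
total_grad = g_pub + g_sec_w             # dL/dw per the identity above
```

The scatter `g_sec_w[perm] = g_sec_wpi` is exactly the application of π⁻¹: the gradient component computed at position j of w_π lands at position perm[j] of w.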