Stephen Scott, BridgeTower Media Newswires //February 16, 2026//
For the past year, AI regulation has looked a lot like a group chat with no admin. States are firing off their own rules, Congress is typing and deleting, and employers are left scrolling and wondering which message really matters. The Trump administration is now trying to end that chaos with a sweeping executive order aimed at reining in state-level AI regulation.
On paper, the goal is simplicity: one national framework instead of 50 different ones. In practice, it’s a much heavier lift. The order pushes federal agencies to challenge state laws and even hints at tying compliance to federal funding. That approach may appeal to businesses craving predictability, but it also tees up serious political resistance and legal questions. And those questions are likely to determine whether this order reshapes AI governance — or just becomes another opening gambit in a long fight between President Trump and certain states. Outlined below is a quick summary of what the executive order is trying to do and what steps employers should take.
Goals of the executive order
It is important to remember that all current and pending state and local AI laws remain enforceable — at least until a court blocks them through injunction or Congress passes a federal law that preempts them. At this stage, the executive order's goals are to replace the patchwork of state rules with a single national framework, directing federal agencies to challenge state AI laws and signaling that federal funding may be conditioned on compliance.
Employers’ action items
As with all things in employment law, it is better to be proactive than reactive. To that end, here are some steps employers can take now to put themselves in the best position to respond quickly should Congress choose to act.
Stay the course on state-law compliance. Don’t be an ostrich; comply with all state and local AI laws.
Build (or update) an internal inventory of AI tools. Pay particular attention to tools used in hiring, promotion, monitoring, scheduling, productivity scoring, sentiment or voice analysis, safety prediction, and other high-stakes employment decisions.
Strengthen AI governance and documentation. Even if a federal standard ultimately emerges, employers will benefit from data-retention plans, bias testing protocols, human-in-the-loop controls, clear vendor documentation, and risk assessments for sensitive use cases.
Review and update vendor contracts. Make sure agreements allow flexibility if state laws remain in force or a new federal framework takes shape.
Watch the federal timeline — and Congress. Key near-term milestones include the DOJ AI Task Force, the Commerce Department’s “onerous law” list, and the Broadband Equity, Access, and Deployment (BEAD) policy notice, all expected in early 2026. Congressional action, meanwhile, remains uncertain.
For now, this is not the moment to mute notifications. The federal government may be trying to take control of the chat, but the states are still actively posting, enforcing, and setting the rules employers must follow. Until a court kicks someone out of the thread — or Congress finally pins a message to the top — the smartest move is to keep up with every conversation that matters. That means complying with state laws, tightening AI governance, and making sure your vendors can pivot when the tone inevitably changes. The next few months will bring louder messages from the federal government, but they won’t end the debate. Like most group chats, this one isn’t going quiet anytime soon.
Stephen Scott is a partner in the office of Fisher Phillips, a national firm dedicated to representing employers’ interests in all aspects of workplace law. Contact him at 503-205-8094 or [email protected].
The opinions, beliefs and viewpoints expressed in the preceding commentary are those of the author and do not necessarily reflect the opinions, beliefs and viewpoints of the Central Penn Business Journal or its editors. Neither the author nor CPBJ guarantees the accuracy or completeness of any information published herein.