US Regional Employment 2022

The new US Regional Employment 2022 guide provides the latest legal information on the impact of COVID-19 on the workplace, the “Black Lives Matter” and “Me Too” movements, union membership, the National Labor Relations Board (NLRB), the Great Resignation, discrimination and harassment issues, workplace safety, and class actions.

Last Updated: September 27, 2022


Authors



Bond, Schoeneck & King, PLLC has a labor and employment practice comprising 90 attorneys across the firm’s offices. Key offices providing legal services across New York include Syracuse, Albany, Buffalo, Garden City, New York City and Rochester. The firm represents management exclusively, both in the private and public sectors and in union and non-union settings, and provides legal representation, advice and counsel related to wage and hour and benefits issues, labor and employment-related litigation, collective bargaining, the administration of collective bargaining agreements, and grievance and arbitration proceedings. The firm also assists with the development of employment-related policies, procedures and handbooks, and provides guidance to HR and business unit managers on the many different laws encompassing employees’ rights. Bond, Schoeneck & King, PLLC’s industry experience runs the gamut, including manufacturing, higher education, healthcare, construction, transportation, financial services, retail, telecommunications, municipalities and school districts, energy, agriculture, technology, insurance, defense and government contractors, hospitality and food service.


Overview – US Regional Employment 2022

Employment law in the private sector, as with any area of the law, necessarily begins with definition. At this stage in the evolution of US labor and employment history, one might expect the fundamental concepts and definitions to be fairly well settled. To many, however, the picture at times has been somewhat unsettled. Owing to the impact of COVID-19, these concerns have been magnified in ways that hardly could have been contemplated.

From a positive point of view, this is arguably attributable to the need and willingness of a vibrant economy to adapt to the changes in the global socioeconomic, political and technological climate. At the same time, alternative solutions and the impact of COVID-19 have created not only new challenges but, arguably, unintended consequences as well.

A global entity seeking to establish or enhance its presence in the USA – or, more specifically, in a given US region (or regions) – will encounter these issues in the context of what is often referred to as today’s “changing workplace”. No matter the nature of the entity, the workplace essentially remains the focus of any dispute. However, even prior to the advent of COVID-19, the issues, relationships and considerations in the modern workplace had been undergoing a redefinition – and, indeed, expansion or delimitation – to the point where what traditionally was viewed as the “workplace” in certain circumstances is no longer what it once was. As will be seen in the analyses below, the uncertainty caused by COVID-19's impact cannot be overstated at this point.

The ever-increasing use of social media and the evolution of its more sophisticated and complex vehicles have transformed the workplace, including (or otherwise affecting) recipients not previously considered and invoking entitlements and/or restrictions not previously recognized. The same applies to the impact upon the workplace of the “gig” economy, cyberspace, AI, analytics, and other technological inroads and advances.

The "Me Too", "Black Lives Matter", pay equity, whistle-blowing and other related movements have made their own impact by highlighting their own issues, whether it be in areas of alleged sexual or other harassment, discrimination, retaliation, the question of implicit bias, workplace and product safety, or financial and business misconduct.

All of this has occurred in the face of what had been a steady decline in the unionization of the private sector workplace, accompanied by efforts to expand the scope of the protections and restrictions of labor laws to both unorganized and organized employees.

As a result of these ongoing developments, the political parties, federal, state and local governments, their agencies, the courts, and arbitral and other forums sometimes appear to be grappling in one way or another to fit the proverbial square peg in a round hole. Struggles surround what some might have considered relatively time-tested meanings of terms such as “employee”, “independent contractor”, “supervisor”, “exempt” status, “joint employer”, “franchisor”/“franchisee”, “successor”, “alter ego”, “ally”, a “primary”, a “secondary”, “civility” (and related considerations), “privacy” and expectations of “privacy”, “confidentiality” and “non-disclosure”. Further controversy concerns fundamental notions of “due process” and efforts arguably to evade wage, overtime, pension, other benefits and/or other financial obligations via alleged misclassifications.

Compounding these considerations are the overlapping, conflicting definitions and applications of terms and concepts at the very heart of the workplace disputes brought before the differing forums each US region provides for such dispute resolution. A definition or application in one forum – or under one constitutional provision or statute or regulation, or contract – may well vary from the definition or application accorded that term in another forum or under another statute, regulation or contract. Examples include the interplay between the First Amendment of the US Constitution and a key provision of the National Labor Relations Act (NLRA), or between Title VII of the Civil Rights Act and that same key provision of the NLRA.

Such issues may arise at a jurisdictional level, in the course of the discovery process, or at a procedural or substantive level. They may relate to the viability of a class or collective action, or its alleged waiver. They may surface in the attempted enforcement of a covenant not to compete or a “no poaching” agreement, whether the individual is otherwise covered by a collective bargaining agreement or not. Indeed, when hiring an individual subject to such restrictions, a global entity may find itself faced with a duty of due diligence to ascertain whether the individual is – or might be – subject to enforceable restrictions that would prevent the individual from fulfilling their duties and responsibilities.

The import of a regional perspective certainly complicates the issues even further. However, if all these developments seem ominous, they may also offer the global entity some insights, possibilities or options that might not be available in another region.

The governance of this nation’s employment law is in so many respects a matter of federal law and, as such, that is the primary focus of this regional guide. Even so, it is vital that global entities understand that the federal laws may be interpreted differently in each region, whether in the decisions of the regional offices of the applicable federal agencies, the region’s federal district courts or its circuit courts of appeal, or ultimately the US Supreme Court in its review of regional conflicts between the federal courts of appeal.

Certain issues, moreover, may well be governed by state or other local law and so, where especially pertinent to the specific needs and interests of a global entity, this regional guide may take note of the interplay of such federal employment law and a region’s state or other local law. This is particularly necessary as each state attempts to tackle the still-evolving impact of COVID-19 on the workplace issues unique to its regions in a manner that is both in conjunction with and independent of the federal government.

COVID-19 has dominated the workplace in 2020, 2021 and early 2022, with shutdowns and reopenings and infection waves and vaccinations affecting every aspect of work life.

Against this backdrop, and in the belief that context matters, this regional guide seeks to provide a picture of the current socioeconomic, political and legal climate in both the US generally and the regions covered herein. Each chapter addresses, within that framework, the alternative approaches and arrangements the global entity will need to consider at its inception when defining and implementing its basic structure, its relationships with those who will be servicing it (whether in a non-union, union or potential union setting), and the import of such decisions.

These issues are complicated all the more by the dramatic impact, financially and operationally, of COVID-19, ranging from real estate considerations to remote and other staffing and working arrangements and relationships, including potential health, safety and liability exposure. This guide emphasizes the significance of the interviewing process, with regard to both the possibilities the process offers and the legal and practical constraints US laws may impose.

Among other legal developments this regional guide describes are those terms and conditions that may be of particular (if not crucial) importance to the global entity and its decision as to where in the USA it might wish to establish or enhance its presence. These items include restrictive covenants against competition, solicitation or poaching (when enforceable and to what extent), confidentiality, trade secrets, benefits considerations, analytics, AI and other privacy issues, workplace safety, immigration and related foreign workers' issues.

The guide covers issues and developments in the areas of discrimination, harassment and retaliation, with regard to both pertinent safeguards and restrictions. Also discussed are key issues relative to the termination of the employment relationship that should be addressed, whether at the outset of the relationship or at the time of the termination.

Most importantly, this regional guide also emphasizes what a global entity needs to understand about the types of disputes that may arise in the aftermath of COVID-19, including:

  • the different internal and external ADR forums and other forums in which such disputes might be heard;
  • the options available to the entity either in anticipation of such disputes or once they arise;
  • the types of remedies the entity might seek or to which it might be exposed; and
  • whether there are any extraterritorial applications of the law that should or must be taken into consideration.

The issues are real and cannot be ignored, but the choices and opportunities are many. That said, the authors wish to acknowledge the invaluable contributions of colleagues who, during the preparation of this introductory chapter, helped to identify those issues and then highlight those choices and opportunities.

Trends and Developments

This section focuses on some of the most recent US regional employment trends of which global entities should be aware. The most obvious – and concerning – development since 2020 has been the impact and uncertainty of the COVID-19 pandemic. However, as that topic permeates almost every chapter, it will be addressed throughout the guide, rather than here.

Developments in two particular intersecting areas are of such critical importance that they warrant special attention in this introduction. Perhaps surprisingly at this early stage in their evolution, both involve the ability to even define – much less measure and establish – certain elements of discrimination. The first development involves AI and the implications of its increased introduction of analytics to the hiring process, whereas the second concerns what has been termed “implicit bias”.

Algorithms and analytics: the potentially discriminatory role of AI in the employment process

Without doubt, the use of AI raises some obvious concerns about job displacement, both globally and in the USA. None of that displacement, short-term or otherwise, should be minimized – particularly with regard to the impact on those least able to cope with it. How and the extent to which that displacement is addressed will present its own challenges, problems and solutions, and will require innovative and effective education and training programs both externally and internally.

What is clear, and hopefully somewhat encouraging, is a heightened awareness on the part of US business and educational institutions of the roles they can – and must – play in the development and implementation of such education and training programs. The benefits these programs can bring both to the recipients and to the institutions themselves, in terms of diversity and inclusion, are also becoming clear. Indeed, more employers today have not only adopted diversity goals, but are incentivizing those involved in the hiring process to meet such goals – which is where AI comes in.

Some commentators have already voiced concerns about AI's “revolutionary” incorporation of analytics (and the algorithms they entail) into the employment process in ways that few could have foreseen. These algorithms are increasingly being used to predict behavior, working traits, qualifications or future performance in testing and other stages of the hiring, job placement and promotional processes, such as:

  • the formulation of job descriptions and responsibilities in job postings;
  • the reviewing of resumes; and
  • the screening of video interviews.

This is ostensibly to promote diversity and inclusion or otherwise guard against the possibility of unlawful bias – possibly in response to the pay equity movement and others, such as "Me Too" and "Time's Up".

Although potentially productive and generally adopted in good faith, the introduction of these algorithms may be problematic in certain material respects – if not outright questionable. Good intentions notwithstanding, the reliability of such algorithms as predictors of behavior or in ascertaining the motivations of decision-makers is far from clear. This remains very much the subject of ongoing challenge and debate among the social psychologists who helped develop the concept and the legal community involved in its application. The latter includes the US judicial system.

Algorithms are dependent upon the decisions made when choosing, collecting and coding the information that determines which models to use and how those models are formulated. Actual or potential abuses of AI, however, have manifested themselves in various contexts, including:

  • the use of technological “data points” as measurements of reaction times without considering the relevance of such reaction times to the actions or decisions in question;
  • non-verbal cues in the form of facial expressions;
  • the substitution of presumed objectivity for human assessments in the decision-making process (eg, eye and body movements, voice nuances and clothing), in order to “see” what humans presumably cannot see (whether due to conscious or unconscious bias);
  • the postings of employment opportunities via social media platforms that may have been selected (unwittingly or otherwise) on the basis of target audiences that fail to include representative numbers of women, the aged, minorities, the disabled and/or other protected categories of potential candidates; and
  • large datasets that, perhaps also unwittingly, disproportionately omit such protected classes.

Even when the analytics indicate a certain linkage, questions have been posed as to whether the linkage is merely one of correlation (and not, in fact, causation). The ability of the data points selected and the models created even to measure what is being determined is also in question.

In short, the fear is that the introduction of a presumably neutral model to avoid conscious or unconscious biases may well result in a subjective process of its own, which unintentionally reflects the very same bias or other biases that the analytics were designed to minimize (if not avoid).

In the words of David Lopez, the former and longest-serving general counsel of the US Equal Employment Opportunity Commission (EEOC), “bad data inputs lead to bad results”. Testifying in a hearing before a congressional subcommittee of the US House Committee on Energy and Commerce on March 4, 2019, Lopez said: “These digital tools present an even greater potential for misuse if they lock in and exacerbate our country’s longstanding disparities based on race, gender and other characteristics” (Inclusion in Tech: How Diversity Benefits All Americans).

By citing “mishaps,” “abuses” and even “horrors”, David Lopez highlighted the need to "examine algorithms and big data in the context of their effects on society and the need to have a framework in place that supports its ethical and just use”. He offered an abundance of “cautionary tales [...] about the failure of predictive analytics to live up to our ideals of nondiscrimination, opportunity and privacy” and spoke of the need for a “better under[standing] and increased scrutiny of outcomes” in light of the relatively newfound “prominence of predictive analytics and algorithms in decision-making and other aspects of society”. As he himself characterized it, an alarming number of mishaps in employment screening emanated from “the elevation of statistical correlation between some variable and purported job performance, qualifications or qualities”.

Clearly, David Lopez’s research indicated a genuine concern about the reliability of these analytics as the predictors they were promised to be. If anything, he believes that their introduction has exacerbated – rather than ameliorated – the problem of discrimination. Again, in his own words, “algorithms are often predicated on data that amplifies rather than reduces the already present biases in society – racial, ethnic and socioeconomic – in part because these issues may not be noticed or a consideration to the people creating the technology”. He pointed out that subjective judgments are made and that “with those judgments come the innate biases of the individuals making the decisions”.

Implicit bias: the part it plays in discrimination

When analyzing his own reservations about the reliability of AI and its algorithms as a predictor of discriminatory behavior, David Lopez observed: “Despite many large tech companies actively trying to increase the diversity of their workforce, there are still factors at play leading to suboptimal results that need to be discovered and ameliorated. One of these issues is likely implicit bias in the hiring and employment context.”

What is implicit bias?

According to social psychologists, each person is subject to their own inherent “unconscious” or “indirect” biases that – even though devoid of conscious intent – are nonetheless probative of discriminatory behaviors. What David Lopez called the "science of implicit bias", in his testimony before the House Subcommittee on Consumer Protection and Commerce, is predicated upon “the more subtle [and] automatic association of stereotypes or [subjective] attitudes about particular groups”. After all, he continued, “people can have conscious values that are still betrayed by their implicit biases”.

Notwithstanding his aforementioned real and serious concerns about the reliability of predictive analytics and algorithms, David Lopez somehow assumed that – however unconscious they may be – “implicit biases are frequently better at predicting discriminatory behaviors than people’s conscious values and intentions”. Within both current thinking in the social science community and the legal framework in which the issue of implicit bias arises, his assumption is very much the subject of ongoing debate.

Disparate treatment and disparate impact

The US Supreme Court has made it clear that reliance upon stereotypical and subjective assumptions or judgments can occur in two distinct and (from a legal standpoint) crucially different contexts – one of “disparate treatment” and one of “disparate impact” (Employment Discrimination Law, American Bar Association Section of Labor and Employment Law, Fifth Edition, Volume I, Chapter 3.I). The distinction is especially pertinent to the issues of conscious and unconscious bias.

i) Disparate treatment

A claim of disparate treatment, by definition, is one asserted in the context of intentional discrimination. Accordingly, a burden of proof is imposed upon the claimant that requires actual evidence of the employer’s intent to discriminate against the claimant based on race, religion, gender, age, disability or other protected legal status. “The ultimate question in every [disparate treatment] employment case is whether the plaintiff was the victim of intentional discrimination” (Reeves v Sanderson Plumbing Prods, Inc, 530 US 133, 153 (2000)). “Proof of discriminatory motivation in such cases is critical” (Teamsters v United States, 431 US 324, 335, n 15 (1977)).

ii) Disparate impact

A claim of disparate impact, unlike disparate treatment, is one asserted not on the basis of intent, but rather upon the impact of the decision or action in question. As such, the claimant’s burden of proof is in no way predicated upon the presence of intent. More precisely, the issue posed is whether the consequences of a policy, action or decision – otherwise neutral at face value but measured by statistically significant criteria or by other objective means – are such that the claimant has been adversely affected in a way that cannot be explained other than by the discriminatory impact of the policy, action or decision. “Good intent or absence of discriminatory intent does not redeem employment procedures or testing mechanisms that operate as ‘built-in headwinds’” (Griggs v Duke Power Co, 401 US 424, 432 (1971)).
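
By way of illustration only, the comparison at the heart of a disparate impact analysis is frequently expressed as a ratio of selection rates. The short sketch below uses invented figures and treats the EEOC's "four-fifths" (80%) rule of thumb as a benchmark; it is a simplified illustration of the concept rather than a statement of the legal standard, which may also turn on tests of statistical significance.

```python
# Hypothetical illustration of a selection-rate (disparate impact) comparison.
# All figures are invented; the 0.8 threshold reflects the EEOC's
# "four-fifths" rule of thumb, not a definitive legal test.

applicants = {
    "Group A": {"applied": 200, "selected": 100},
    "Group B": {"applied": 150, "selected": 45},
}

rates = {group: d["selected"] / d["applied"] for group, d in applicants.items()}
highest_rate = max(rates.values())

for group, rate in rates.items():
    impact_ratio = rate / highest_rate
    note = "potential adverse impact" if impact_ratio < 0.8 else "within the four-fifths benchmark"
    print(f"{group}: selection rate {rate:.0%}, impact ratio {impact_ratio:.2f} ({note})")
```

On these hypothetical figures, Group B's selection rate (30%) is only 60% of Group A's (50%); the facially neutral procedure would therefore be flagged for further scrutiny even in the absence of any discriminatory intent.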

Conscious or unconscious? What David Lopez’s observations mean for allegations of bias

The gravity of David Lopez's concerns about the accuracy or reliability of analytics and algorithms as predictors of discriminatory behaviors or motivations cannot be overstated. He based these concerns on a number of factors common to both conscious and unconscious bias claims, including:

  • the “tendency of search results themselves to reflect stereotypes and bias”;
  • inaccuracies in “statistical correlations” drawn; and
  • the reality that the people creating the technology might themselves even:
    1. not notice the problems; or
    2. produce results reflective of their own individual “subjective judgments” and “innate biases”.

David Lopez’s misgivings about the questionable or misplaced reliance upon stereotypical assumptions and subjective judgments made clear that he was referring to claims of both conscious and unconscious (implicit) bias. Indeed, he defined the science of implicit bias as simply “the more subtle” form of automatically associating such stereotypes or attitudes with particular groups.

If, by his own assessment, the predictive reliability of analytics and their algorithms is questionable when it comes to assessing the conscious behaviors and motivations at issue in a disparate treatment claim, then one might expect a similarly guarded assessment when attempting the more difficult task of unmasking or betraying those supposedly bona fide conscious values and intentions to reveal an asserted unconscious (implicit) bias. On the contrary, however, David Lopez concludes that these same analytics and algorithms – when used to assess unconscious or implicit bias – “are frequently better at predicting discriminatory behaviors than people’s conscious values and intentions”.

On what basis David Lopez reaches this conclusion, and how the predictive determinations of which he speaks will be made or measured, remains to be seen. Even for those who were heavily involved in the creation and development of the concept of implicit bias, there is still very much a state of flux surrounding:

  • its nature and definition;
  • the assumptions upon which it is based;
  • how and under what circumstances it can or cannot be measured; and
  • its reliability as a predictor of behavior.

Further questions concerning implicit bias

Whether David Lopez’s more optimistic assessment of implicit bias as a better predictor of discriminatory behavior will prove valid depends on how issues such as the following are addressed as the concept evolves.

Does implicit bias equal intent?

A claim of implicit bias substitutes an assumed stereotypical or subjective bias against or about another group for the evidence of intent US law requires in a case of disparate treatment. In Price Waterhouse v Hopkins (490 US 228 (1989)), however, the US Supreme Court indicated that the presence of such a stereotypical assumption cannot – in and of itself – establish the requisite discriminatory intent. Rather, the test is whether the evidence establishes that, in the situation at hand, the accused in fact “act[ed] on the basis of the stereotypical assumption in question” – in this case, “on the basis of a belief that a woman cannot be aggressive, or that she must not be” (id at 250).

How does one measure an unconscious bias?

Without discussing the specifics, suffice it to say that the tests generally used to measure unconscious bias focus on assumptions based upon testing other members of the same generic class (eg, same race or gender) but not the employment entity in question. Generally, the proffered experts have not even met – much less tested – the accused individual(s) and often have not even examined the deposition or other recorded evidence.

What is often tested, moreover, are the millisecond reactions of these sample individuals to certain situations. Crucially, the nature of the tests in no way mirrors or otherwise reflects the manner or length of time in which the alleged discriminatory decisions or actions in question occurred.

Decisions as to hiring, termination or lesser discipline, promotion, assignment, compensation or the like are generally deliberative and often collaborative. They are simply not made in a matter of the milliseconds measured by the tests in question.

What does research on tests for implicit bias show?

Of late, there has been a radical change in the thinking of many in the social science community regarding the ability of the implicit association test (IAT) to serve as a predictor of the unconscious bias such tests were designed to establish. This change began with those who created the IAT, which remains the type of test most generally used.

The now-modified public position of Tony Greenwald, Mahzarin Banaji and Brian Nosek, as featured on a website that promotes research based upon their testing, stresses that:

  • "the IAT should not be used to make decisions about others, to measure somebody else’s automatic racial preference, or to decide whether an individual should or should not serve on a jury";
  • "using the IAT as the basis for making significant decisions about self or others could lead to undesired and unjustified consequences”; and
  • “attempts to diagnostically use such measures for individuals risk undesirably high rates of erroneous classification”.

Consider, as well, the comments of IAT creator Dr Greenwald and his colleague, Calvin K. Lai, in their paper Implicit Social Cognition: "In the [p]ast 20 years, research on implicit social cognition has established that social judgments and behavior are guided by attitudes and stereotypes of which the actor may lack awareness. Research using the methods of implicit social cognition has produced the concept of implicit bias, which has generated wide attention not only in social, clinical and developmental psychology, but also in disciplines outside of psychology, including business, law, criminal justice, medicine, education and political science. Although this rapidly growing body of research offers prospects of useful societal application, the theory needed to confidently guide those applications remains insufficiently developed" (Annual Review of Psychology 2020; 71: 419–45).

These and other quotes and excerpts further emphasize, by way of example, the basis for what appears to be an emerging consensus among many social psychologists that:

  • an individual’s score on the tests in question is not a reliable predictor of that individual’s likelihood of engaging in discriminatory behavior;
  • a fairly large number of studies do not support the conclusion that a group of persons showing higher bias in measures of implicit bias are more likely to discriminate than the group of persons showing lower bias on such measures; and
  • if anything, further research is needed to examine the possible accumulative effects of implicit bias on employment outcomes.

Many studies, in fact, do not find a positive correlation between implicit bias and discriminatory behavior, even when looking at aggregate data; instead, they indicate the very opposite of the behavior that the implicit bias data would have predicted.

What if, as research continues to evolve, the social psychologists and authors of the IAT themselves cannot confirm the reliability of the IAT or other tests in measuring or otherwise predicting an individual’s behavior – and, indeed, caution against its use for such purposes, even when compared with aggregate data? It remains to be seen whether the continued use of the IAT as a basis for social framework “evidence” will be regarded instead as an improper substitution of one stereotype for another – particularly in cases where the accused individual has not even taken the test in question and no such proffer is made.

Bearing these developments in mind, Michael Selmi's The Paradox of Implicit Bias and a Plea for a New Narrative (2017) notes that behavior often labeled as “implicit” could “just as easily be described as 'explicit'”. There, moreover, Professor Selmi urges a “move away from a focus on the unconscious, and the IAT, to concentrate instead on field studies that document discrimination in real world settings”. He states that the idea of defining implicit bias as “unconscious, pervasive and beyond one's control” is a message that can be “difficult to reconcile with our governing legal standards, which often turn on one's ability to control one's behavior” and “is difficult to square with traditional notions of legal proof”. Selmi further notes that implicit bias “has its greatest effect on spontaneous decisions but plays a lesser role in deliberative decisions” and is “most commonly identified with the controversial disparate impact theory where proof of intent is not required”. Implicit bias is tied to the IAT, he cautions, and that test “has limited predictive ability”.

Regulating the use of AI in the hiring process: US regional legislation

Against this backdrop, the New York City Council passed a bill in November 2021 that regulates employers' and employment agencies’ use of “automated employment decision tools” in making employment decisions, effective as of January 1, 2023. This new local law is part of a growing trend towards examining and regulating the use of AI in hiring, promotional and other employment decisions.

Specifically, the new law prohibits an employer or employment agency from using an automated employment decision tool in making an employment decision unless the following requirements are met:

  • the tool has been subject to a bias audit within the last year; and
  • a summary of the results of the tool's most recent bias audit, as well as the tool's distribution date, have been made publicly available on the employer or employment agency’s website.

A “bias audit” is defined as “an impartial evaluation by an independent auditor” that includes “the testing of an automated employment decision tool to assess the tool’s disparate impact on persons of any Equal Employment Opportunity component 1 category required to be reported by employers pursuant to 42 US Code § 2000e-8(c) and 29 CFR § 1602.7".
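
The law itself does not prescribe an audit methodology. As a purely illustrative sketch, assuming a simple selection-rate approach of the kind described above and using invented data, a bias audit summary for such a tool might tabulate impact ratios by sex and race/ethnicity category along the following lines.

```python
# Purely illustrative sketch of a per-category summary a bias audit might
# produce for an automated employment decision tool. The categories, records
# and methodology below are assumptions, not requirements of the law itself.

from collections import defaultdict

# Hypothetical tool output: (category, advanced_by_tool)
records = [
    ("Female / White", True), ("Female / White", False), ("Female / White", True),
    ("Male / White", True), ("Male / White", True),
    ("Female / Black or African American", True), ("Female / Black or African American", False),
    ("Male / Hispanic or Latino", True), ("Male / Hispanic or Latino", False),
]

totals = defaultdict(lambda: {"screened": 0, "advanced": 0})
for category, advanced in records:
    totals[category]["screened"] += 1
    totals[category]["advanced"] += int(advanced)

rates = {c: t["advanced"] / t["screened"] for c, t in totals.items()}
top_rate = max(rates.values())

print("Category | selection rate | impact ratio (vs highest-rate category)")
for category, rate in sorted(rates.items()):
    print(f"{category} | {rate:.0%} | {rate / top_rate:.2f}")
```

Whether any particular methodology would satisfy the audit requirement is, of course, a matter for the implementing rules and any subsequent agency guidance.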

Several states and cities have passed or are considering similar laws concerning the use of AI and other technology in employment decisions. Illinois’ Artificial Intelligence Video Interview Act requires employers using AI interview technology to:

  • provide advance notice and an explanation of the technology to applicants;
  • obtain the applicant’s consent to use the technology; and
  • comply with restrictions on the distribution and retention of videos.

Similarly, Maryland has enacted a law that requires employers to obtain an applicant’s written consent and a waiver prior to using facial recognition technology during pre-employment job interviews. Other states have also proposed or passed similar legislation that would address the use of facial recognition in the employment context.

Additionally, the EEOC launched an initiative in 2021 that aims to ensure AI and other technological tools used in making employment decisions comply with the federal civil rights laws. As part of its initiative, the EEOC will gather information about the adoption, design and impact of employment-related technologies. The EEOC will also issue technical assistance to provide employers with guidance on algorithmic fairness and the use of AI in employment decisions.

How the future narrative will unfold remains to be seen.
